S3-Episode 1: Predicting crimes before they occur: not so sci-fi anymore
While it may sound like science fiction, it’s actually happening today. Law enforcement organizations are using data to predict criminal activity before it occurs. Though predictive policing could make crime reduction more efficient, it also raises real risks to privacy and other human rights. Dr. Christopher Parsons, former associate at the University of Toronto’s Citizen Lab and now Senior Technology Advisor at the IPC, talks about some of those risks and how they can be mitigated.
Notes
Christopher Parsons is a Senior Technology and Policy Advisor at the IPC. Prior to joining the IPC in early 2023, he was a Senior Research Associate at the Citizen Lab, an interdisciplinary laboratory based at the University of Toronto’s Munk School of Global Affairs and Public Policy.
- Choosing to focus on research related to privacy, national security, and public policy [2:38]
- The modernization of policing through technology [4:57]
- Defining the term predictive policing [7:19]
- Bail assessments as an example of predictive policing [8:33]
- Potentially problematic aspects of predictive technologies [9:34]
- Findings of the Citizen Lab’s Surveil and Predict report [11:11]
- Privacy and predictive policing [12:20]
- Human rights issues associated with predictive policing [14:18]
- Key recommendations of the Citizen Lab’s Surveil and Predict report [18:07]
- The need for openness and accountability when it comes to the use of predictive policing tools [21:09]
- Future issues on the horizon related to law enforcement practices and privacy in Ontario [26:26]
Resources:
- To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada (Citizen Lab, September 1, 2020)
- ‘Algorithmic policing’ in Canada needs more legal safeguards, Citizen Lab report says (Toronto Star)
- Law Enforcement and Security Agency Surveillance in Canada: The Growth of Digitally-Enabled Surveillance and Atrophy of Accountability (Citizen Lab, February 26, 2018)
- Law Enforcement and Surveillance Technologies (IPC Privacy Day webcast)
- IPC Strategic Priorities 2021-2025
- Next-Generation Law-Enforcement (IPC resources)
Info Matters is a podcast about people, privacy, and access to information hosted by Patricia Kosseim, Information and Privacy Commissioner of Ontario. We dive into conversations with people from all walks of life and hear stories about the access and privacy issues that matter most to them.
If you enjoyed the podcast, leave us a rating or a review.
Have an access to information or privacy topic you want to learn more about? Interested in being a guest on the show? Send us a tweet @IPCinfoprivacy or email us at @email.
Transcripts
Patricia Kosseim:
Hello, I’m Patricia Kosseim, Ontario’s information and privacy commissioner, and you’re listening to Info Matters, a podcast about people, privacy, and access to information. We dive into conversations with people from all walks of life and hear real stories about the access and privacy issues that matter most to them.
Welcome listeners and thanks for tuning in. Science fiction fans out there may remember the movie Minority Report. It’s an action thriller starring Tom Cruise, based on a story by well-known science fiction writer Philip K. Dick. In the film, police officers in the pre-crime unit not only serve and protect, they serve and predict, using psychic technologies to stop crimes before they happen. By analyzing the visions of mutant precogs with the ability to see into the future, they collect clues to preemptively track down suspects. Set in 2054, a future not too distant from 2023, the story can seem a bit fanciful, but is it really? Believe it or not, law enforcement agencies in North America and other jurisdictions are already using technology to predict crime. Instead of psychic abilities, they’re relying on algorithms to pinpoint potential criminal activity. It’s called predictive policing, and that’s what we’ll be exploring in this episode today. These technologies have the potential to revolutionize law enforcement in ways we never thought possible in real life, but they also come with some very real privacy and ethical risks.
My guest is Dr. Christopher Parsons. He’s a senior technology and policy advisor at my office. Prior to joining the IPC in early 2023, Christopher was a senior research associate at the Citizen Lab based at the University of Toronto’s Munk School of Global Affairs and Public Policy. The Citizen Lab uses an interdisciplinary approach to research, applying practices from political science, law, and computer science to produce evidence-based knowledge on digital technologies, human rights, and global security. Christopher, welcome to the show.
Christopher Parsons:
Thank you for having me.
PK:
So Chris, can you start by telling us about yourself and your work, and how you came to focus your research interests on privacy, national security, and public policy issues?
CP:
Well, all of my career really started in philosophy. My first degrees are in philosophy. During that time, my principal interest was how exactly democracies are able to both develop and flourish. I really took a look at that through the lens of privacy and the impacts that surveillance can have on people’s ability to communicate freely. At the same time, I was doing work in information technology. That sort of naturally led me to pursue this as an interest during my PhD. There I was trying to understand how certain networking technologies adopted by telecommunications providers could be used to surveil users for a number of different purposes. That, in combination with a bunch of political discussions that were happening in this country around new technologies available to law enforcement and security services, really propelled me into an interest not just in privacy but in national security and more hands-on public policy: what laws are coming up, what regulations are emerging.
I think this became much more viscerally real when I came to the Citizen Lab. Within a few months, if not weeks, of joining the Citizen Lab, Edward Snowden had begun revealing documents via journalists, and the lab and I got tapped to start analyzing and understanding how some of these documents might be revealed publicly or not. That made me very quickly appreciate that this wasn’t just an academic area of study, although clearly it was that too, but in fact there was some real work to be done. And so since then, for the past 10 years or so, I’ve tried to understand the kinds of activities that governments need to be able to undertake to protect individuals’ lives as well as Charter rights, and simultaneously to ensure that any protocols or policies or laws being advanced are really scrutinized, to ensure that they both protect our democratic rights and don’t go too far in the balance between security and civil liberties.
PK:
You’ve spoken about how we’re already living in a kind of sci-fi world when it comes to the technological capabilities of law enforcement, somewhat similar to what was portrayed in the movie Minority Report that I mentioned a couple of minutes ago. Tell us, in what ways has policing entered the science fiction world?
CP:
So 30 years ago, if someone wanted to figure out what you had been writing to someone else, they either had to intercept and steam open letters, which is actually legally fraught in this country. Or if you wanted to tail someone, to know where they’re going in a city, where they’re traveling, you needed whole teams of officers, either in the security services or law enforcement, to track that person, which means handoffs and cars and planning and hotel stays. There’s a lot that goes into it. Our diaries were discrete things that you could only access by breaking into someone’s home. Safes were where we held our most important documents. Law enforcement interceptions meant clipping alligator clips to a telephone line.
Move to today, and law enforcement can undertake many of those activities with far fewer human resources and much more quickly. We store huge amounts of data in the cloud, encrypted and non-encrypted both. That means if you’ve got a personal diary, there’s a better than zero chance it’s sitting on a server somewhere. You might have a blog or a website where you post personal thoughts that once were much more private. Smart sensors are proliferating everywhere, which means a lot of our mobility data is available. Smartphones themselves are adding geolocation information to our photos and other things, so that’s another source of information; we no longer need 30 agents figuring out where you’re going. We live in a world today where the most advanced, most well-resourced law enforcement actors can do things that really were science fiction just 20 or 30 years ago.
Now, I don’t want to suggest that every single law enforcement agency is equivalently empowered, because that’s just not the case, obviously. And I’m also not trying to say that there are no challenges facing law enforcement given the proliferation of technology. They do face challenges. But the kinds of difficulties they have now compared to the challenges they would’ve had 30 years ago are very different. We need to be mindful of that when we’re looking at contemporary and emerging technologies, and respect that we built our criminal laws with an understanding of how they would be used 20, 30, 40 years ago, and their use is changing dramatically because of technology.
PK:
So the term predictive policing might not be too familiar to some people. Can you explain what that concept means?
CP:
It’s really married to another concept, which we should touch on first. At a high level, we have what are often called algorithmic technologies, which just means there’s some sort of process that automates a law enforcement activity or measure. You can imagine a body camera that a law enforcement officer wears. When something happens, say loud voices occur or a firearm is drawn by the officer, any number of different activities, the camera activates. That’s an algorithmic technology: it’s acting in response to something.
Predictive technology, in contrast, is one where actions are inferred that will take place in the future. Here’s an example. You can think of some jurisdictions where they have heat maps of expected criminal activity. Based on previous activities that have taken place in a geographic region, law enforcement officers or the central body managing those officers may be notified that, “Hey, we may need to distribute more officers in certain locations based on the historical prevalence of alleged criminal activity.” And so there, it’s not a response to a specific action, but a prediction that an action will take place at some point in the future.
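To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of a heat-map-style prediction: it simply ranks grid cells by historical incident counts. The data and cell names are hypothetical, and this is not the logic of any specific product discussed in the episode.

```python
# A minimal, illustrative sketch of a heat-map-style predictor: rank grid
# cells by historical incident counts. All data and names are hypothetical.
from collections import Counter

# Hypothetical historical records: (grid_cell_id, incident_type)
historical_incidents = [
    ("cell_17", "theft"), ("cell_17", "assault"), ("cell_17", "mischief"),
    ("cell_04", "theft"), ("cell_09", "theft"),
]

def hotspot_cells(incidents, top_n=2):
    """Return the top_n grid cells ranked by past recorded incidents.

    The "prediction" is driven entirely by where incidents were previously
    recorded, which is why biased historical data yields biased predictions.
    """
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

print(hotspot_cells(historical_incidents))  # ['cell_17', 'cell_04'] (tie broken by order)
```

The point of the sketch is only that the forecast is a function of past recorded incidents, which is what makes governance of the underlying data so important.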
PK:
Another application I’ve heard you speak about before is in the area of bail assessments.
CP:
Yes. And so this is an area where it’s quite high stakes. It’s a situation where an individual who is charged with some sort of offense is seeking bail, and there may be an algorithmic way of predicting whether or not that individual is more or less likely to violate their bail conditions. Here again, it’s not necessarily drawn from the individual in question, it’s not specific to the person who’s seeking bail, but rather there’s a series of characteristics attached to that person that are then fed into an algorithm that predicts: should we grant bail or shouldn’t we? And if we do, what kinds of conditions? How much money should have to be put up, what kinds of restrictions on mobility, and so forth, ought to be associated with that grant or denial of bail?
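A minimal sketch, assuming entirely hypothetical features, weights, and threshold, of the kind of scoring described here. It is not taken from any real bail assessment tool; it only illustrates that the score is computed from characteristics attached to the person rather than from an individualized assessment.

```python
# Illustrative only: a toy risk score built from weighted, group-level
# characteristics. Features, weights, and the threshold are hypothetical.
HYPOTHETICAL_WEIGHTS = {
    "prior_failures_to_appear": 2.0,
    "prior_charges": 1.0,
    "unstable_housing": 0.5,
}

def bail_risk_score(person_features):
    """Sum weighted characteristics attached to the person.

    As noted above, the score reflects characteristics attached to the
    individual, not an individualized assessment of them.
    """
    return sum(weight * person_features.get(name, 0)
               for name, weight in HYPOTHETICAL_WEIGHTS.items())

score = bail_risk_score({"prior_charges": 2, "unstable_housing": 1})
stricter_conditions = score >= 2.5  # hypothetical threshold
print(score, stricter_conditions)   # 2.5 True
```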
PK:
There have been documented issues with the accuracy and the use of predictive technologies in those contexts, whether it be heat maps of crime or bail assessments. Can you tell us a little bit more about some of those issues and how, if at all, they can be mitigated?
CP:
I think Vancouver is actually a great example of how you can deploy some of these predictive technologies and be sensitive to the fact that they could be very problematic if not governed carefully. In Vancouver, historically, they’ve used a program called GeoDASH. This is an example of crime heat maps. Within Vancouver, it’s well known that certain areas of the city, such as the Downtown Eastside, have a high number of officers responding to calls on a regular basis. It’s one of the poorest and most disenfranchised urban areas in Canada. And so the system has been designed such that any suggestion that more officers should go to the Downtown Eastside is automatically scrubbed out, and they go to the next option instead. This is a way of recognizing that they can’t just rely on the algorithm. They have other processes in place.
Moreover, in the case of GeoDASH, there are supposed to be regular meetings that take place to confirm that the information GeoDASH is presenting and the allocation of officers is appropriate and not contributing to issues of over-policing, systemic discrimination, or other harms that could follow from blindly obeying what the algorithm and the prediction say the officers should be doing.
PK:
The Citizen Lab, where you worked previously, published a report in 2020 called To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. In fact, you supervised the co-authors at the time they wrote the report. Can you tell us a little bit more about the findings in that report?
CP:
So that report was written by Kate Robertson, Cynthia Khoo, and Yolanda Song. They did a magnificent job, so I have to recognize the heroic work they did in preparing that report. At a high level, it sought to undertake an assessment of both algorithmic and predictive policing technologies. The goal was to get ahead of where we saw the puck going. We knew these sorts of technologies were likely to be adopted. And as such, the goal was to do a whole lot of work to try to understand which agencies are using them in Canada (it’s a Canada-specific report), with what implications, and what sort of guardrails might make sense for policy makers to consider building. And so the recommendations, which maybe we can get to in a second, were really there to prompt people at the political level, but also within the legal community and the justice community, to think through what’s coming down the line, avoid having to respond to it after the fact, and build proactive policies and measures to mitigate the harms while potentially taking advantage of benefits that may exist with the predictive technologies in question.
PK:
It’s an excellent example of the importance of anticipating these emerging risks and acting at the front end of the arc to address, or at least identify, what some of the downstream issues might be. So tell us, what did you find were some of the privacy impacts of predictive policing in particular?
CP:
So I think on the privacy front, probably the most important thing is how personal information is collected and used, especially because when we’re talking about a law enforcement context, there’s a power asymmetry. Individuals may be unable to, or feel that they’re unable to, say, “No, I don’t want this collected. I don’t want my information used in this way.”
Moreover, we noted repeatedly that there was an issue of the repurposing of data. I’ll give you an example. We know across the country, based on public studies and public surveys, that some communities experience over-policing, especially racialized, Black, and Indigenous communities. And as a result, individuals in those communities are arrested more often. So whenever you build an algorithm that’s learning, as an example, on past alleged criminal behaviors, you’re going to surface biased data just by default. And then if you rely on that biased data to train your models, the models are quite likely to absorb some of those biases and then recreate and systematize a whole new set of systemic harms and biases.
So one of the real concerns we were worried about is how data is collected, when it is collected, under what conditions it is collected, and then how historical data might be reused. One of the highlight recommendations warns all the actors in the criminal justice system to be very mindful about how data is used, because if that care isn’t taken, the harms that I know law enforcement agencies, human rights commissioners, privacy commissioners, and society generally are concerned about could be repeated.
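A toy simulation, with entirely hypothetical numbers, may help show the feedback loop being described: if officers are sent where incidents were previously recorded, and being patrolled makes incidents more likely to be recorded, an initial skew in the data keeps growing even when the underlying activity is identical. This is a sketch of the general dynamic, not a model of any actual police service.

```python
# Toy simulation of a predictive-policing feedback loop (hypothetical numbers).
import random

random.seed(0)
true_rate = {"A": 1.0, "B": 1.0}   # identical underlying activity in both areas
recorded = {"A": 12, "B": 8}       # area A starts out with more recorded incidents

for _ in range(50):
    patrolled = max(recorded, key=recorded.get)        # patrol the "hotspot"
    for area, rate in true_rate.items():
        detection = 0.9 if area == patrolled else 0.3  # patrols record more incidents
        if random.random() < rate * detection:
            recorded[area] += 1

print(recorded)  # the recorded gap between A and B widens despite equal true rates
```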
PK:
So the risk of self-perpetuating the very bias that you start with in the first place. You mentioned human rights issues. The paper also laid bare the significant impacts on other human rights more broadly. Can you describe what some of those are, including discrimination, but also some of the other human rights that were canvassed in the paper?
CP:
When we were looking at the way that predictive technologies and algorithmic technologies could operate, we did take a broad human rights-based analysis. Privacy is a highlight because that is what we were very interested in, but human rights obviously extend well beyond just the right to privacy. To give you a couple of examples, think about issues such as freedom of association: when you have a predictive technology that is grouping people together, there’s a question of whether or not that may encourage some people to not associate with that group if possible.
So what might that mean in practice? It might mean that, let’s say, law enforcement is directed to go to a certain community with some frequency, right? That’s where your church is, or that’s where your union hall is, or frankly it’s just where you like to hang out with your friends, but you may be disincentivized to go there because you’re worried about the implications of a heightened law enforcement presence. There are also expression concerns, especially when we talk about predictive policing technologies that rely on the collection of personal information or information that you’re communicating publicly. Quite often there is an erroneous perception amongst some government agencies that whatever you say online is fair game.
And of course we have privacy rights that are meant to constrain government agencies so they can only collect information when it’s necessary and proportionate to their activities. But if you don’t know what’s being collected, if you don’t know what communications are being monitored, then you may be more hesitant to communicate, which is a huge issue in a democracy. You should be able to communicate freely and without overt fear, so long as what you’re engaged in isn’t unlawful or doesn’t fall into an area where law enforcement has a legitimate interest in monitoring.
When we talk about issues such as bail, or the way that predictive policing can move into the justice system, there we run into issues of the right to be free from arbitrary detention or the right to due process. When you don’t know what tools are being used to assess whether you’ll be incarcerated or not, or to what extent those tools impact decisions, and you don’t even have a way of challenging them, it can really undermine those fundamental values. That’s really important because we often talk about rights in a way that doesn’t directly touch our lived experience, but certainly when you are at the point of being detained by law enforcement, that’s where our rights become very real, very quickly. And so it’s important that any new technology doesn’t unduly infringe upon them.
At a macro level, one of the big risks arises where predictive policing technologies, or predictive technologies more broadly, are being adopted by government or government agencies. When individuals don’t know what’s being collected, don’t know what the processes are, and really don’t understand decisions undertaken by government agencies or law enforcement agencies or officers, then they may be less willing to exercise their freedoms. That has a chilling effect on society. It may deter their willingness to engage in the political process. If I say something about this politician or that councillor, am I going to be arrested, or am I going to keep my job? When we don’t have high levels of transparency and accountability, it can really undermine the ability of citizens to connect with one another and to maintain the vitality of, or revitalize, the key elements of our democracy, such as our ability to express ideas and concepts with one another and to associate with one another freely.
PK:
Now, the report makes several recommendations to government and to law enforcement, and I was wondering if you could tell us what some of the key recommendations are.
CP:
The report has a lot of recommendations, so I would recommend anyone who’s interested pick up a copy. You can access it from the Citizen Lab’s website. It has seven priority recommendations and 10 ancillary ones. I’ll just touch on a couple of the priority ones.
One of the recommendations was that governments put moratoriums on law enforcement agencies’ use of technology when and where the technology relies on historical data stores. What that means is, if, as an example, law enforcement agencies were acquiring facial recognition technology but had to train it on, or were relying on, old mugshots or something like that, then there should be real hesitancy, because we know that historical mugshot data is going to be biased because of which individuals have been more likely to be arrested or detained by law enforcement.
Second, governments should make reliability, necessity, and proportionality prerequisites to any algorithmic or predictive policing technology, which is to say, if you’re talking about a highly invasive technology, then it should be proportionate to the harm. We shouldn’t be using highly invasive technologies to, as an example, predict or identify or stop shoplifters. Third, somewhat self-evident I guess, but law enforcement agencies should be fully transparent about the technologies they’re adopting. Now, to be clear, that isn’t to say that the brand and make and model of a technology needs to be made public. There are good reasons to keep some of that private, and I get that, as did the authors, but law enforcement should come out and say, “We are using, as an example, facial recognition technologies in this environment, for these reasons, and here’s how they might be used,” right? So it’s clear to the public what’s going on.
Fourth, we suggested that provincial governments should establish procurement rules around law enforcement’s acquiring of technologies. This would mandate things like impact assessments or annual reports on how the technology is used, sources of training data, things of that nature. The goal there is to ensure that law enforcement agencies really have a process in place, and that there isn’t a situation where well-meaning, I would expect, individual officers go and acquire technologies or use technologies in ways that could be injurious to the public, insofar as they might be biased or contribute to over-policing.
And lastly, we strongly recommended that governments and law enforcement agencies alike engage in very extensive consultations, and that those include the parties who have historically been subject to over-policing or state discrimination. Part of those extensive consultations and listening has to include not just “How do we build a better policy around the technology in question?” but “Should we be using this technology at all? Is this actually the way public funds should be spent?”
PK:
The IPC has adopted next generation law enforcement as one of four key strategic priority areas to focus our work. And our goal there is to contribute to building public trust in law enforcement by working with relevant partners to develop the necessary guardrails for the adoption of new technologies that protect both public safety and Ontarians’ access and privacy rights. So Chris, what do you think are the most important steps when it comes to developing and deploying next generation law enforcement technologies?
CP:
I think the first thing, and it’s overriding, is openness and accountability. There should be an open declaration: these are the kinds of tools we want to use, and here’s why. And attached to that, there needs to be an accountability regime, so that when a technology does seem to appear out of nowhere, someone is responsible for explaining what exactly has been going on. Also, and I mentioned this previously, policing techniques and technologies really need to be responsive to communities, and that has to be different from demanding responses from communities, right?
So it’s not enough, it’s not appropriate, to say, “Hey, we have 13 cameras that we want to put down in some part of the city or down a rural road for whatever reason. Tell us what you think about us doing that.” It needs to be, “We have identified a challenge in the community. Here’s how we think the challenge manifests. We would like to talk to you about that challenge.” And then in the course of it, say, “Here are some things that we have considered. What do you think?” So it’s not presented as a done deal; it is a genuine dialogue back and forth. That means there’s a meaningful and deep listening requirement that can sometimes involve a community saying, “No, we don’t see that as a problem,” or ultimately, “No, we disagree with your proposals, and here’s what we need instead.” And that’s much more challenging.
Maybe just to take a slight step to the side: one of the real challenges facing law enforcement agencies across the country, and in Ontario as well obviously, is that they’re responsible, or have been made responsible, for so many challenges that are in excess of their day-to-day capabilities. They’re not the group that can necessarily respond to all of the harms in our community, nor do we expect them to be. And so that’s why a lot of community groups will say, “We don’t necessarily need a law enforcement solution right here. We need a school solution, or we need some other social capacity building.”
And then the last thing when it comes to next-generation technologies is that I really think it’s important to build in adversarial testing by independent researchers. They could be academics, they could be private companies, they could be NGOs, but they would critically assess how well the technology operates.
To give you a hands-on, practical example from past work I’ve done: when I was doing my PhD, we did research on the way that license plate recognition systems operate. These are the cameras that law enforcement use to automatically capture and assess hundreds or thousands of license plates an hour. It’s pretty phenomenal, to be honest. But we just looked at what the false positive and false negative rates were. And that mattered because the way these systems and cameras often operate is that a police car is driving down the highway and it gets a whole bunch of pings saying things like, “Oh, this person isn’t a registered driver,” or flagging more serious crimes. But when you have a high false positive rate, you could incorrectly and inaccurately say, “Oh, well, Chris was driving here and he isn’t allowed to drive,” when it was actually a misread by the camera.
So figuring out what those false positive and false negative rates are matters, because it impacts the ability of law enforcement to do their jobs. This is where it’s important to have adversarial testing. It’s not because we want to undermine or prevent or stop or block the use of technology per se. It’s really to ensure that the technology being adopted by the law enforcement community is fit for purpose and accurate, both to protect individuals’ civil rights and, frankly, to ensure that law enforcement are using tools that will be maximally useful in their own missions and ethics.
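To illustrate why these rates matter, here is a short piece of hypothetical arithmetic, not taken from the study mentioned above: when a reader scans thousands of plates an hour and plates of genuine interest are rare, even a modest false positive rate means most alerts are misreads.

```python
# Illustrative arithmetic only; all numbers are hypothetical assumptions.
plates_per_hour = 3000
hit_fraction = 0.001        # assume 1 in 1,000 scanned plates is truly of interest
false_positive_rate = 0.02  # assume 2% of innocent plates are misread as hits
true_positive_rate = 0.95   # assume 95% of genuine hits are caught

true_hits = plates_per_hour * hit_fraction * true_positive_rate
false_hits = plates_per_hour * (1 - hit_fraction) * false_positive_rate

print(f"true hits/hour:  {true_hits:.1f}")   # ~2.9
print(f"false hits/hour: {false_hits:.1f}")  # ~59.9, so most alerts would be misreads
```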
What’s important is transparency by law enforcement and the adoption of strong norms of accountability. And why do this? In addition to it being the right thing to do, I think, we want it done ideally so that there becomes a standardized way for the public and watchdog offices, such as the privacy commissioner’s office, to understand what’s going on, not to come down on law enforcement agencies and shake or wag our fingers and say, “You’re doing something wrong.” It’s actually to provide that trust and reassurance and, when something is problematic, to be able to raise it. There are any number of instances over the past decade where privacy commissioners’ offices, civil liberties groups, academics, and the public have learned about something going on months or years too late. If there had been more accountability and more transparency upfront, those problems might not have arisen, or they might have been addressed well before the harms or problems occurred.
PK:
Chris, just to round out our conversation, I’d like you to look forward a few years down the road, what do you see as the most interesting and maybe challenging issues linked with law enforcement practices and privacy protection in Ontario?
CP:
So when we’re looking at law enforcement, I actually think it’s worth looking at the international policy and legal discussions that are taking place. I’ll give you a couple of examples. Internationally, there have been updates to something called the Cybercrime Convention, which has really expanded the kinds of information sharing and law enforcement collaboration we might see across international lines in the coming decade. This will radically expand the ability of Canadian law enforcement agencies to work with their international partners, and vice versa. But there are, frankly, some concerns about possible human rights impacts and the expansions of powers under which collaboration might take place.
There’s also, a little closer to home and a little less macro-international, something called the CLOUD Act in the United States. This enables the United States government to enter into bilateral agreements with other countries, one-to-one agreements that will massively change the way electronic information can be shared by law enforcement agencies, or collected by law enforcement agencies. Canada is negotiating one of these with the United States right now. Should it turn out similar to what we’ve seen with the United Kingdom and Australia, Canadian law enforcement agencies would be able to serve warrants on Facebook, as an example, directly in the United States, which would short-circuit a lot of the difficulties in filing those right now.
Now, why does that matter? Well, it matters because as we see an increased ability of Canadian law enforcement to go abroad, they will be able to adopt many of the techniques that we see in the United States. There are all sorts of things that are very contentious in the US right now. Things such as keyword searches, where you go to Google or Microsoft or another search engine company and say, “We’re looking for people who have run these kinds of searches because we think they might be attached to some criminal activity.” Or geolocation warrants, where again you go to Apple or Google or another company that’s collecting geolocation information and say, “Hey, we want to know the people who were in these locations at these times,” presumably because there is a criminal event or an alleged criminal event that took place during those times.
We’re also likely to see the increasing use of malware in this country, whereby Canadian law enforcement agencies place malware on endpoint devices to collect information for evidentiary purposes. It isn’t the case that every single law enforcement agency in Canada is going to wake up one day and, boom, they all have the same powers and the same capabilities. There’s going to be a bunch of training. I suspect the larger law enforcement agencies in this province in particular will be able to use these much more quickly than some of the smaller ones. But nonetheless, we’re going to see a whole bunch of new technological possibilities because of these international legal changes that are afoot. I think that’s a really exciting and dynamic space to watch to see what might be coming.
And also, frankly and ideally, to work with law enforcement agencies ahead of any of these powers showing up on Canadian doorsteps and ask, “Okay, what do you need? Why do you need it? How can these tools be used in a way that is maximally privacy protective and minimizes the human rights impacts, while appropriately enabling law enforcement to carry out its lawfully mandated operations?”
PK:
Christopher, thanks for joining us on the show today. It’s been a great conversation and we’re very excited to have you join the IPC. You’ve given us a lot to think about when it comes to predictive policing technologies and the guardrails that need to be in place to protect the rights of citizens.
This is our first episode of season three of the Info Matters podcast, and we’re off to a great start. I’m very excited about the privacy and access to information issues we’ll be exploring this season, so stay tuned for more episodes. For listeners who want to learn more about our initiatives and resources in the area of next-generation law enforcement and other privacy and access topics, please visit our website at ipc.on.ca. You can also call or email our office for assistance and general information about Ontario’s access and privacy laws. Thanks for joining us for this episode of Info Matters. And until next time.
I’m Patricia Kosseim, Ontario’s information and privacy commissioner, and this has been Info Matters. If you enjoy the podcast, leave us a rating or a review. If there’s an access or privacy topic you’d like us to explore on a future episode, we’d love to hear from you. Send us a tweet @IPCinfoprivacy or email us at @email. Thanks for listening and please join us again for more conversations about people, privacy, and access to information. If it matters to you, it matters to me.