I suspect many of the opponents of the use of facial recognition technology (but perhaps not all of them) are not opposed to the enforcement of criminal codes, but rather deeply skeptical that governments can be trusted with such tools. Considering the already highly intrusive surveillance-state apparatus that exists in the Five Eyes countries, the thought of adding facial recognition technology is quite terrifying.
With cameras everywhere, this could give the government the capacity to track where everyone is at any moment - especially if everyone is installing doorbell cameras and the like, whose corporate suppliers are likely to bend to rubber-stamped warrants or make deals with the NSA to further government surveillance.
There's some of that, but there is also an "enforcing the law is bad" crowd. There was an Atlantic article a few years ago that was aghast at the fact that Ring cameras were resulting in people being arrested for stealing packages.
That was a long and wild read. I did have a caveat in my original comment, in anticipation that some people might actually hold the view that MY discussed. I have certainly heard about progressive jurisdictions where theft just isn't prosecuted, but the whole concept seems so crazy it's still a bit hard to believe.
Yeah, it's nuts. I understand that there are reasons why police might not go all "Inspector Javert" over a couple stolen packages, and that we may not even want them to do so, but the thesis of that article is very much that it is wrong, and perhaps racist, for people to get upset that someone is repeatedly stealing their Amazon packages.
I suspect that a regime that is going to illegally abuse facial recognition is also not going to be stopped by a law against using facial recognition at all. We might as well have it as a tool for legitimate law enforcement investigation because I doubt that some "ban" on it as a technology is really going to save us from Big Brother.
Right. The slippery slope argument is bogus. If there's any autocracy in America's future, that regime is going to use whatever tools technology makes available. And there won't be any debate.
Those arguing against greater use of these identifying technologies need to point out why rule-of-law-constrained governments and police shouldn't use such technologies. There are arguments that can be marshalled, for sure. But "it puts us on the path to Big Brother" doesn't seem like one. What puts us on the path to Big Brother are things like election nullification and subversion of democratic norms.
Keeping it illegal does afford greater protections for citizens than not - for example you could claim fruit of the poisonous tree, to get evidence excluded where it was derived from the illegal use of facial recognition mass surveillance. Similarly, it limits options for funding and deploying the technology.
Also: if we rely on face recognition, we should make sure that the judicial system is able to find exculpatory evidence on this basis: if you're accused of a certain crime at a certain time and place, your defense team should be able to look for evidence of your presence elsewhere (which is not easy, as recordings are often deleted for reasons of privacy and storage capacity).
You are perhaps over-estimating how expensive this software is, or at least how expensive it will be in the not too distant future. More than likely, if an evil repressive government wants this technology it's not going to be hard to get it.
Agreed. Let’s not kid ourselves that the database wouldn’t also be full of Trump supporters merely attending a rally or abortion activists marching in protest. This information could easily be used to harass opponents of the ruling political party, justifying searches and detainment just because having a match in the database provides cause.
Either you have a society where the government is empowered to harass people for their political views or you do not. The Soviet Union was quite the archetypal totalitarian state without a lot of high-tech tools. I don’t think today’s China has anything on 1950’s Russia in that regard.
>>Agreed. Let’s not kid ourselves that the database wouldn’t also be full of Trump supporters merely attending a rally or abortion activists marching in protest.<<
I don't think we need to "kid ourselves" this wouldn't be the case. We need to pass laws and formulate regulations preventing this from being the case. If we can successfully do this, why not opt for less crime? Are you claiming we cannot do this? Perhaps access to the database would be provided only upon issuance of a warrant...
There are things that people want to do that are ethical but that they don't want the whole world to know about.
Maybe they are into some consensual but potentially not-well-accepted sexual interests. Maybe they go to AA meetings or a psychologist. Maybe they are involved in a controversial political group. Maybe they are in an insular religious community but are considering leaving. I can think of millions of reasons why someone wouldn't want all of their daily whereabouts monitored.
We don't want to live in a world where the government has so much surveillance information that it can easily harass, intimidate, or blackmail someone it doesn't like. So if we are going to increase surveillance for better enforcement of criminal laws, we need assurances that these measures will only be used for investigating crimes, and that authority to access the data is highly limited both in terms of who has access and how the data can be used.
Who is going to look at the results of this data from every camera on every corner in every city and town in the US?
There should be regulations confining analysis of this data, making clear that doing so without a legitimate reason is unacceptable (and cause for dismissal). That's already the case for hospital employees: you cannot look up a patient's information without a good reason.
My reading was that Matt was advocating for *more* cameras, which I assumed would be government supported. I don't imagine people would watch this footage for casual fun; they would be looking for a criminal. Therefore anyone who does not seem to be a criminal would not be subject to facial recognition technology, which I guess is the basis for people's objections. Not sure if they're worried about being put in some kind of national facial recognition database for future punishment/ oppression, or about giving criminals an unfair disadvantage?
I do think that a world where there are cameras in some public places, with the data only able to be parsed by humans, is different from a world where cameras in public are nearly everywhere, and your face can be automatically recognized and all the data automatically processed, analyzed, integrated, and stored at large scale. Perhaps the latter world requires some reanalysis of the pre-existing social contract.
And just to add, it shows society does a pretty good job of ***not*** sharing this info.
You almost never see videos of regular joes blowing their life savings in slot machines for 8-hour shifts even though the videos exist. Nor do you often see videos of guys slurring their words. Occasionally some poor schmuck gets exposed. But by and large we never see this stuff.
This is a good point, but the kicker is less about the government having technology like this and more about the fact that the technology itself is extremely opaque and cops, attorneys, and juries are all going to be equally unable to engage with it productively. Law enforcement is just not equipped to deal with this.
How do we monitor the algorithm? Who gets to pressure test it and make sure that it is doing what it says it's doing? What happens when version 3.4.96 identifies your kid in the grainy screencap of someone getting out of a car, and version 3.4.97 disagrees?
I strongly suspect that juries will just accept "computer says it was you" as an extremely compelling argument, because that's how most people interact with technology.
I work in ML/AI and volunteer with a group that encourages BIPOC young adults to learn about and pursue careers in math/science/technology (I mentor in ML/AI/scientific computing). What you're saying is exactly what someone in the know should be worried about with facial recognition. It's not that good, it's lazily opaque to the average citizen, and its accuracy varies significantly by the race, gender, and age of the person being classified. Guess which groups it performs worst for -> https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
I don't think the main point of the technology is to decide who is who on a video once it gets to court. I think the idea is more "a car was stolen yesterday, we have video showing some random dude doing it, we have no idea who he is, let's run the picture through a database and see if a name pops up." At that point you use the match as a lead to find them, then use the video itself as evidence and let the jury decide how much the accused looks like the person on the tape. It's like a faster and more certain version of releasing the video to the media in hopes some viewer recognizes them.
Yeah, I think we might disagree here. I think that the scenario you describe sounds plausible, but note the basic circularity of the logic that you are using:
There is a video of someone that committed a crime, and the cops search a large database to find a person that looks like the person in the video. The database is large, so the computer works hard and finds someone (or probably a lot of people) who plausibly looks like the person in the video. Then the cops present the video as evidence that the person they arrested committed the crime, and ask the jury to decide for themselves whether the person on the stand looks like the person in the video. And he probably does, because the computer is pretty good at finding people that look like that. But did the computer pick the right person among all of the people who plausibly look like the video? Is the right person even in the database to begin with? There's not really any way of knowing without secondary evidence.
In a lot of cases the cops might find a lot of secondary evidence when they investigate whomever the computer selects, which is fine. But in a lot of cases they won't, and in cases like that I'll bet there would be a real temptation to use the exact same logic that you did above to get a conviction anyway.
Fundamentally, though, how is that a more likely result than a similar misidentification from broadcasting the video on the local news and having someone call in and say "that looks like my co-worker!"? The technology is really just another tool for finding the suspect, and at the end of the day the claim that this is the person in question carries the same burden of proof as it would under contemporary methods.
Again we might disagree, but I think it's basically because in practice people give a huge amount of deference to technical solutions because the underlying issues are too abstract to reason with.
If a person gives a tip to the cops the defendant's attorney can cross examine the informant and ask a bunch of pertinent questions to clarify their basis for identifying the person in the picture, which amounts to additional evidence supporting or disputing the ID. None of that secondary evaluation of the ID itself is really available in a database search, because an attorney can't ask the convolutional neural network why particular layers were tuned the way that they were.
So put up guardrails and trust that the USA is fundamentally different than China.
The biggest factor in unsolved crime is the "street code". If there's a crime and no one in the neighborhood wants to talk, put up cameras as a last resort. This kind of thing could go a long way toward reducing inner-city violence.
This is an intuition I hold, too. Laws on the books we're not going to enforce are actively harmful. The Anker paper backs this intuition up from another direction.
This is my hang-up. I agree with most of what Matt's said above, but giving the government the tools to track its citizenry's every move... Let's just say there's a reason China is the main exporter of this technology/civil model.
I broadly agree, but I think this piece’s main weakness is that it’s written against the backdrop of Twitter crazies who oppose law enforcement. Thus it doesn’t address the concerns of normal people who instead prioritize the well-being of the average, law-abiding citizen. Us normies are all for catching (and punishing!) criminals, but we have other concerns about expanding police powers. How do you make sure innocent people are not falsely accused? How do you balance these real needs with privacy concerns (e.g., do we allow any and every cop unmonitored access to the most powerful facial recognition tech)? In short, in a healthy democracy we should always pause before giving the government new powers. Ultimately, as often happens, we may conclude with a sigh that giving this power is a necessary evil, but we may want to think about adding some checks for reassurance. I wish MY had addressed these mainstream concerns.
I can't speak for every jurisdiction, but in the ones I am familiar with, courts absolutely are aware of the low reliability of witness identifications, and the rules of evidence have been re-written to address and mitigate these concerns.
I hadn’t heard that, but if it’s right it’s going to lead to even fewer criminals being punished unless we can make up the gap with more reliable tools.
eyewitness testimony requires less lawyerly skill to introduce at trial than any other form of evidence. rules against hearsay make it difficult to testify to what a witness learned from others. what they personally saw is fair game unless it is completely unconnected to the charged offense.
one can certainly call an expert witness to impeach identification testimony, but few criminal defendants can afford experts. i have one client who has a very good job and she is struggling to pay for a single expert. she’s also one of my better off clients.
Can you vaguely indicate which jurisdictions these are? I'm an NA lawyer, and while we're vaguely aware of the sketchy nature of eyewitness evidence, at least my jurisdiction still heavily favours it over other forms of evidence.
No, but we do have strict rules about evidence admissibility, perjury laws, the critical role of cross-examination, etc. Centuries and millennia of experience taught us how to use witness testimony correctly and beware its pitfalls. We should aim for no less when incorporating new methods.
IMO the Federal Rules of Evidence and their state analogues are not by any means paragons of the correct application of evidence (inter alia they're at least in part explicitly counter-Bayesian) and reflect the staunch reliance on tradition to substitute for considered and principled reasoning. There's a prominent, and in my view unfortunate, strain of thought that essentially treats Anglo-American evidentiary norms as good by dint of their persistence and pedigree rather than because they bear the slightest relation to how actual humans conduct themselves and make assessments outside the confines of a court of law.
Not everything in them is bad, mind you, but treating them as the considered result of decades and centuries of evolutionary pressure to reach the correct result is a long way from the correct paradigm in my view.
omg yes. on the other hand, the first thing clients want to do is dig up/drag in dirt on their accusers. the instinct is very strong and transcends race, gender and class. blah blah blah uses drugs, had an affair, whatever. it so predominates what i hear that it can be hard to focus clients on their actual cases.
also, the legal definition of relevance (rule 401 i think) is very very broad. the principle that probative value must outweigh potential for unfair prejudice is 1) obvious and 2) too ill-defined to be much more than a blank check made out to trial judges
In the one case where I was selected for the jury, I was shocked to see that the police officer who testified was not allowed to read from his notes while testifying, but was allowed to look at his notes to "refresh his memory" and then testify again after doing that. I can see that there's some intuitive idea of how the memory works on which this set of restrictions and permissions might make sense, but not on any realistic idea of how it works!
If you have JSTOR access, I think this paper by Laurence Tribe does a remarkably good job of arguing that there are certain ways the standards of evidence *should* be anti-Bayesian: https://www.jstor.org/stable/1339610
The biggest and most prominent examples are FRE 404(a)(1) (generally referred to as "conduct in conformity" [with the defendant's character]) and FRE 404(b)(1) (disallowing prior bad acts as evidence of a defendant's character). https://www.law.cornell.edu/rules/fre/rule_404
(a) Character Evidence.
(1) Prohibited Uses. Evidence of a person’s character or character trait is not admissible to prove that on a particular occasion the person acted in accordance with the character or trait.
[...]
(b) Other Crimes, Wrongs, or Acts.
(1) Prohibited Uses. Evidence of any other crime, wrong, or act is not admissible to prove a person’s character in order to show that on a particular occasion the person acted in accordance with the character.
Separately, the hearsay rules more or less in their entirety. In brief: the world you and I know and live in and all of private enterprise basically run on hearsay (modulo the fact that technically hearsay only exists inside a court of law, and thus literally everything outside a court is hearsay), and in courts of law it's presumptively disallowed.
I think this is all kind of addressed, at least implicitly. The thesis is that more effective policing means less crime means fewer accusations at all, which should lead to fewer false accusations. Better technology use should lead to fewer false accusations and/or easier exonerations (as we’ve seen with the expanded use of DNA evidence).
I get the apprehension about giving the police more tools, but I think you need to make a case for what they’re going to use those tools for that’s different and worse from what they already do. I think that’s an easy case to make when the new tools are military weapons and vehicles. With surveillance stuff? I’m not sure how much would change given the degree to which we’re already tracked by cameras and internet stuff. What would they use it for that they can’t do already with eyewitnesses, patchwork video availability, etc.; and why would it be bad?
IMO it’s addressed only under the implicit premise of a police force that’s 100% professional and acting in good faith. I’m far from those demonizing the police, but let’s not forget that they’re human, that power corrupts, etc., and frankly that, beyond the theoretical concern, evidence isn’t lacking for abuse of current police powers by bad actors at all levels. I think these concerns deserved explicit and comprehensive attention in the piece.
But what part of this involves giving the police more power or relies on them being 100% professional and in good faith? They can already arrest whoever they want with lower standards of evidence. They can stop and kill people with relative impunity. The people that they do investigate, charge, and convict end up in prison for a very long time! IMO one of the underlying assumptions here is that the current police aren't actually that good in the US so we need to find a way to make them better. So let them use facial recognition instead of relying on eye-witnesses and racial profiling. Use ankle bracelets and breathalyzers so there's less differential ("racially problematic") enforcement of bail/probation conditions. Speed cameras instead of DWB! (I know that last one is from a different piece than here, but it's in the same spirit).
Assume that the police are... sub-optimal (I certainly do!). But then tell me how these proposed powers (and I'm not sure how much of this I would classify as new powers per se, but ok) would lead to them doing worse things. As I mentioned above, this is easy to do for the military stuff: If you give cops more guns they will shoot more people. If you give them tanks and shit they will act more like an occupying force with any provocation (including riots or whatever, but also even just peaceful protests). Even if you just give them cool tactical gear (helmets, vests, cool belts, etc), it will encourage the cosplay warrior crap which makes them less effective and more dangerous to the public.
Can you make a similar case against using facial recognition and location monitoring? I know that sometimes people use "don't break the law if you don't want to be punished" in bad faith. But I think the point here is that if you have effective crime detection, then you won't get falsely accused of breaking into houses if the GPS shows that you weren't breaking into houses. Or if you don't drink above a certain amount then the cops won't waste time following and harassing you to check up on whether you're complying with your probation/parole conditions. So what is the case for "using these technologies would let the cops do new bad things" instead of doing fewer of the old bad things?
To be fair, if you're just of the opinion that they'll find a way to abuse anything given the chance, I 100% won't argue with that. But I do think that there are levels of degree, and that what's proposed might lead to less police misbehavior than the status quo.
If police officers are too "fucking stupid" to use valuable tools to reduce the crime rate the solution is to recruit smarter people into policing not ban the use of valuable tools.
This doesn't make sense to me as an argument against more non-human surveillance. If you don't trust the police, wouldn't you want there to be evidence beyond a detective saying "he did it"? If you were a juror, would you feel more confident convicting based on eyewitness testimony or on camera/DNA evidence?
Neither, or either. I’d be confident convicting based on the weight of the accumulated evidence - the more the better, but mostly the better scrutinized the better. Camera/DNA evidence isn’t some objective truth immune from challenges and manipulation, and it requires rigorous methodologies of application just like witness testimony.
In any case all this is a separate issue from the question of abuse. The answer to that is simple. It’s not black or white. You don’t write someone a blank check but you do trust them with a signed check for a specific sum despite that also compromising you to an extent (your routing and account numbers are on there etc). Life requires balance. With government we need to try to get good people there but also put guardrails on them to guide them to do their work properly and not abuse their powers.
I suspect that a good deal of the concern about facial recognition software is rooted in a distrust of the police and how they will abuse it.
Other than senior citizens whose opinions are shaped by television police procedurals, a lot of people (and not just Twitter crazies) believe that police are prone to abusing their authority -- particularly when it comes to interactions with Black people.
Because of that distrust there is probably a stronger than is actually warranted reluctance to give police even more tools that could be abused.
I think the trust/distrust dichotomy isn’t helpful. There are very few angels or devils out there. We need to rather think in terms of cost benefit analysis (as MY does) but then *also* account for potential abuses both within and outside the legal procedure and regulate to minimize them (the issue MY overlooks).
P.S. I strongly recommend reading John McWhorter on the police and black people. Some of the worst things most liberals now take for granted are simply factually untrue. It’s a very sad state of affairs.
"this piece’s main weakness is that it’s written against the backdrop of Twitter crazies who oppose law enforcement."
I don't think the idea remains with just "Twitter crazies." In my undergrad, that ethos infused many anthropology/sociology/ethnology/South Asian studies talks, because the professors in those disciplines regularly worked in non-democratic or authoritarian countries for which the legal system did not accurately reflect the practices of the populace, and greater enforcement means greater hardship. I suspect the principle then diffuses out to people who study the US legal system, without anybody really paying attention to its preconditions for relevance (also relevant: Matt's discussion of trans-Atlantic pollination in last week's mailbag).
But yeah, in a democratic society, the concept doesn't really make sense.
I never meant to imply that Twitter crazies don’t have day jobs, but that’s not how they’re influencing MY and skewing the emphases in his public writing.
I think we need to talk about the elephant in the room: if catching criminals is good, it will involve catching a disproportionate number of black men.
Progressives don't like enforcing criminal laws because doing so will target black people much more commonly. If we use facial recognition software, or make more extensive use of DNA databases, they will be used much more commonly against black men. Of course, if these tools ultimately reduce mass incarceration while reducing crime, then everyone is better off, including the black men who would otherwise end up going to prison. But progressives will fight these measures tooth and nail. Expect articles in Vox about racially biased facial recognition, for example.
Liberals would be better off talking about this stuff, just as conservatives would be better off talking about a coherent health care policy.
There’s no elephant, and this fact was always well known. The other side of the coin is that black communities will disproportionately benefit, because they are disproportionately victimized by crime. Such policy would thus be neither racist nor “affirmative action” but simply the state doing its job to protect its citizens. Disproportionate black crime has root causes that need to be addressed as such, but the idea that giving black criminals a pass on racial grounds is a legitimate or helpful measure, let alone some “anti-racist” necessity, is pure insanity that never occurred to anyone before the last decade, and it’s still totally unclear to me how this toxic nonsense ever came to mainstream attention.
I think you underestimate the elephant content of the room. I've encountered progressives in real life, not just on Twitter, who are genuinely willing to take the position that wealthy white people commit *violent felonies* at the same rate as poor black people, or possibly even higher rates(!), and that the reason that's completely unreflected in crime statistics is that police and/or prosecutors cover up this wave of white criminality, if it ever gets reported in the first place (which they believe in many cases it isn't).
Someone made this argument to me on Reddit the other day! They added the additional theory that rich white neighborhoods were intentionally under-policed for this exact reason.
Yep, I've encountered that variation in real life too. It typically comes up when someone is explaining how it's racist that there's so much more police activity in [POOR NEIGHBORHOOD] than [RICH NEIGHBORHOOD] and I respond by pointing out that [POOR NEIGHBORHOOD] has dramatically higher violent crime rates, to which the other person replies that [RICH NEIGHBORHOOD] is really just as violent, but the police stay away from it because it's full of rich white people. If I proceed further and question why rich white people would tolerate all that crime in their neighborhood, I usually get a response that sounds like some variation of "Get Out"/"The Purge" where the rich white people are supposedly beating/raping/torturing/murdering non-white workers or random passers-through.
My experience of the "black community" indicates that today it's still very much the same community which backed Clinton's '94 crime bill to the hilt.
The hipster lefties just want to pretend that the 12 black humanities Ph.D grads that they follow on Twitter are "representative" of African Americans, of whom under a quarter have a bachelor's or graduate degree, and have erected a circular firing squad and peer pressure architecture around that stupid shibboleth.
I've always thought that there's a niche waiting to be filled in The Discourse for a Black commentator pushing back against those 12 Black humanities Ph.D grads. There's probably already someone out there that I'm just not aware of. Someone like Ruben Gallego telling people to cut it out with the Latinx crap.
Roland Fryer, Thomas Chatterton Williams, Kmele Foster, Glenn Loury, Coleman Hughes, even Adolph Reed. There’s no shortage, but our gatekeepers on the liberal side seem to have decided that they’re all suspect somehow.
No ethnic or racial grouping within the US is a monolith.
But I regard it as a particularly stupid exercise for the woke/hipster left to anoint a very small, skewed sample as representative of such a "community."
“They are disproportionately victimized by crime” really is something that we don’t seem to talk about like we should, and I find it heartbreaking. The vast majority of people in zip codes where bullets fly frequently are law abiding citizens. Think about the way we talk about school shootings, and think about the way we *don’t* talk about being a person trying to raise a family in those neighborhoods. It’s horrifying. School shootings terrify us and dominate conversation even though they’re statistically rare. There are places in this country where you really do need to worry about shootings, not a statistical rarity at all, and we (those of us who don’t and won’t live in those places) just sort of take it for granted.
The people in that situation have less representation in our national discussion about crime than the criminals do. If white people with money had to live that way, this country would already be fascist because if it took a police state to stop it, we’d have a police state.
It’s like we’re so afraid that some conservative will bring up “black on black crime” that we make little or no effort at helping the people who are being terrorized. Again: the vast majority of people who live in our nation’s scariest zip codes are 1) black and 2) law abiding citizens. How is this not one of the most pressing concerns of people who care about equity?
Also, given that perpetrators of black-on-black crime are less likely to be caught than perpetrators of black-on-white crime, and given that most murders are intraracial, more enforcement will mean more black people going to prison.
If punishment is like lightning striking, if the randomness of punishment blunts deterrence, and if a lot of black people are being struck by lightning, a sensible anti-racist might want lightning strikes to be gentler.
“Like lightning striking” presumes both random and rare. If something happens all the time, it’s no longer “random” especially when it’s happening to people who are committing the same kind of action.
Right now there are far more white people getting away with crimes than black people (see my comment on another thread about drunk driving), so it’s much more likely that catching more criminals will probably result in the cohort of caught criminals having a racial makeup that is closer to that of the rest of the country.
I think the important thing to note is that basic moral decency requires us not to give half a fuck whether this is true or not.
Enforce the law, protect the innocent, and send the guilty to prison, and if we do it well enough we will probably find that there are fewer guilty people and we can probably imprison them for shorter terms.
I suspect many of the opponents of the use of facial recognition technology (but perhaps not all of them) are not opposed to the enforcement of criminal codes, but rather very skeptical that governments can be trusted with such tools. Considering the already highly intrusive surveillance state apparatus that exists in the Five Eyes countries, the thought of adding in facial recognition technology is quite terrifying.
With cameras everywhere, this could give you the capacity to track where everyone is at any moment, especially if everyone is installing doorbell cameras etc. whose corporate suppliers are likely to bend to rubber-stamped warrants or make deals with the NSA to further government surveillance.
There's some of that, but there is also an "enforcing the law is bad" crowd. There was an Atlantic article a few years ago that was aghast at the fact that Ring cameras were resulting in people being arrested for stealing packages.
https://www.theatlantic.com/technology/archive/2019/11/stealing-amazon-packages-age-nextdoor/598156/
That was a long and wild read. I did have a caveat in my original comment, in anticipation that some people might actually hold the view that MY discussed. I have certainly heard about progressive jurisdictions where theft just isn't prosecuted, but the whole concept seems so crazy it's still a bit hard to believe.
Yeah, it's nuts. I understand that there are reasons why police might not go all "Inspector Javert" over a couple stolen packages, and that we may not even want them to do so, but the thesis of this article is very much that it is wrong and, perhaps racist, for people to get upset that someone is repeatedly stealing their Amazon packages.
It was certainly a unique experience to witness a writer try to spin the 27th time someone engaged in theft.
I suspect that a regime that is going to illegally abuse facial recognition is also not going to be stopped by a law against using facial recognition at all. We might as well have it as a tool for legitimate law enforcement investigation because I doubt that some "ban" on it as a technology is really going to save us from Big Brother.
If we ban face recognition, then only criminals will recognize faces.
Right. The slippery slope argument is bogus. If there's any autocracy in America's future, that regime is going to use whatever tools technology makes available. And there won't be any debate.
Those arguing against greater use of these identifying technologies need to point out why rule-of-law constrained governments and police shouldn't use such technologies. There are arguments that can be marshalled, for sure. But "it puts us on the path to Big Brother" doesn't seem like one. What puts us on the path to Big Brother is things like election nullification and subversion of democratic norms.
Keeping it illegal does afford greater protections for citizens than not. For example, you could claim fruit of the poisonous tree to get evidence excluded where it was derived from the illegal use of facial recognition mass surveillance. Similarly, it limits options for funding and deploying the technology.
Couldn't one legislate what face recognition can be used for and what not?
Then it is clear in advance what is legitimate use and what would be fruit of the poisonous tree.
(thanks, UK, you made me look this up and I wrongly thought it was a "poisoned tree")
Also: if we rely on face recognition, we should make sure that the judicial system is able to find exculpatory evidence on this basis: if you're accused of a certain crime at a certain time and place, your defense team should be able to look for evidence of your presence elsewhere (which is not easy, as recordings are often deleted for reasons of privacy and storage capacity).
You are perhaps over-estimating how expensive this software is, or at least how expensive it will be in the not too distant future. More than likely, if an evil repressive government wants this technology it's not going to be hard to get it.
Agreed. Let’s not kid ourselves that the database wouldn’t also be full of Trump supporters merely attending a rally or abortion activists marching in protest. This information could easily be used to harass opponents of the ruling political party, justifying searches and detainment just because having a match in the database provides cause.
Either you have a society where the government is empowered to harass people for their political views or you do not. The Soviet Union was quite the archetypal totalitarian state without a lot of high-tech tools. I don’t think today’s China has anything on 1950s Russia in that regard.
>>Agreed. Let’s not kid ourselves that the database wouldn’t also be full of Trump supporters merely attending a rally or abortion activists marching in protest.<<
I don't think we need to "kid ourselves" this wouldn't be the case. We need to pass laws and formulate regulations preventing this from being the case. If we can successfully do this, why not opt for less crime? Are you claiming we cannot do this? Perhaps access to the database would be provided only upon issuance of a warrant...
I am claiming we can not do this.
https://en.m.wikipedia.org/wiki/Room_641A
There are things that people want to do that are ethical but they don't want the whole world to know about it.
Maybe they are into some consensual but potentially not-well-accepted sexual interests. Maybe they go to AA meetings or a psychologist. Maybe they are involved in a controversial political group. Maybe they are in an insular religious community but are considering leaving. I can think of millions of reasons why someone wouldn't want all of their daily whereabouts monitored.
We don't want to live in a world where the government has so much surveillance information that it can easily harass, intimidate, or blackmail someone it doesn't like. So if we are going to increase surveillance for better enforcement of criminal laws, we need assurances that these measures will only be used for investigating crimes, and that authority to access the data is highly limited both in terms of who has access and how the data can be used.
Who is going to look at the results of this data from every camera on every corner in every city and town in the US?
There should be regulations that confine analysis of this data: doing so without a clear reason should be unacceptable (cause for dismissal). That's already the case for hospital employees; you cannot look up a patient's information without a good reason.
My reading was that Matt was advocating for *more* cameras, which I assumed would be government supported. I don't imagine people would watch this footage for casual fun; they would be looking for a criminal. Therefore anyone who does not seem to be a criminal would not be subject to facial recognition technology, which I guess is the basis for people's objections. Not sure if they're worried about being put in some kind of national facial recognition database for future punishment/ oppression, or about giving criminals an unfair disadvantage?
I do think that a world where there are cameras in some public places, with the data only able to be parsed by a human, is different from a world where cameras in public are nearly everywhere, and your face can be automatically recognized and all the data can be automatically processed, analyzed, integrated, and stored at a large scale. Perhaps the latter world requires some reanalysis of the pre-existing social contract.
Yet the dream of most Twitterers is to have a post “go viral” so everyone sees it…
And just to add: it shows society does a pretty good job ***not*** sharing this info.
You almost never see videos of regular joes blowing their life savings in slot machines for 8-hour shifts even though the videos exist. Nor do you see videos of guys slurring their words. Occasionally some poor schmuck gets exposed. But by and large we never see this stuff.
This is a good point, but the kicker is less about the government having technology like this and more about the fact that the technology itself is extremely opaque and cops, attorneys, and juries are all going to be equally unable to engage with it productively. Law enforcement is just not equipped to deal with this.
How do we monitor the algorithm? Who gets to pressure test it and make sure that it is doing what it says it's doing? What happens when version 3.4.96 identifies your kid in the grainy screencap of someone getting out of a car, and version 3.4.97 disagrees?
I strongly suspect that juries will just accept "computer says it was you" as an extremely compelling argument, because that's how most people interact with technology.
I work in ML/AI and volunteer with a group that encourages BIPOC young adults to learn about and pursue careers in math/science/technology (I mentor in ML/AI/scientific computing). What you're saying is exactly what someone in the know should be worried about with facial recognition. It's not that good, it's lazily opaque to the average citizen, and its accuracy diverges significantly by the race, gender, and age of the person being classified. Guess which groups it performs the worst for -> https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
I don't think the main point of the technology is to decide who is who on a video once it gets to court. I think the idea is more "a car was stolen yesterday, we have video showing random dude doing it, we have no idea who he is, let's run the picture through a database to see if a name pops up," at which point you use that as a lead to find them and then use the video itself as evidence and let the jury decide how much the accused looks like the person on the tape. It's like a faster and more certain version of releasing the video to the media in hopes some viewer recognizes them.
Yeah, I think we might disagree here. I think that the scenario you describe sounds plausible, but note the basic circularity of the logic that you are using:
There is a video of someone that committed a crime, and the cops search a large database to find a person that looks like the person in the video. The database is large, so the computer works hard and finds someone (or probably a lot of people) who plausibly look like the person in the video. Then the cops present the video as evidence that the person that they arrested committed the crime, and asks the jury to decide for themselves whether the person on the stand looks like the person in the video. And he probably does, because the computer is pretty good at finding people that look like that. But did the computer pick the right person among all of the people who plausibly look like the video? Is the right person even in the database to begin with? There's not really any way of knowing without secondary evidence.
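The worry about large-database searches can be made concrete with a quick base-rate sketch. The numbers below are purely hypothetical (no real system's error rates are cited in this thread), but they show why "the computer found a match" is weak evidence on its own when the database is huge:

```python
# Illustrative base-rate calculation: a match from a large database search
# is weak evidence by itself. All numbers here are hypothetical.

def expected_matches(db_size, false_match_rate, hit_rate=0.99):
    """Expected counts of (true, false) database entries flagged as
    matching the video, assuming the perpetrator is in the database."""
    true_matches = hit_rate
    false_matches = db_size * false_match_rate
    return true_matches, false_matches

def p_match_is_perp(db_size, false_match_rate, hit_rate=0.99):
    """Probability that a randomly chosen flagged entry is actually the
    perpetrator, given the perpetrator is in the database at all."""
    true_m, false_m = expected_matches(db_size, false_match_rate, hit_rate)
    return true_m / (true_m + false_m)

# Even a very accurate matcher (1-in-100,000 false match rate) run against
# a 10-million-person database flags roughly 100 innocent people.
p = p_match_is_perp(db_size=10_000_000, false_match_rate=1e-5)
print(f"chance a given match is the perpetrator: {p:.1%}")  # roughly 1%
```

And that calculation is conditional on the right person being in the database at all; if he isn't, every match is a false one, which is the circularity problem above in numerical form.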
In a lot of cases the cops might find a lot of secondary evidence when they investigate whomever the computer selects, which is fine. But in a lot of cases they won't, and in cases like that I'll bet there would be a real temptation to use the exact same logic that you did above to get a conviction anyway.
Fundamentally though, how is that a more likely result than a similar misidentification from broadcasting the video on the local news and having someone call in and say "that looks like my co-worker!" The technology is really just another tool to find the suspect and at the end of the day the plausibility of that being the person in question has the same burden of proof as it would using contemporary methods.
Again we might disagree, but I think it's basically because in practice people give a huge amount of deference to technical solutions because the underlying issues are too abstract to reason with.
If a person gives a tip to the cops the defendant's attorney can cross examine the informant and ask a bunch of pertinent questions to clarify their basis for identifying the person in the picture, which amounts to additional evidence supporting or disputing the ID. None of that secondary evaluation of the ID itself is really available in a database search, because an attorney can't ask the convolutional neural network why particular layers were tuned the way that they were.
So put up guardrails and trust that the USA is fundamentally different than China.
The biggest factor in unsolved crime is the "street code." If there's a crime and no one in the neighborhood wants to talk, put up cameras as a last resort. This kind of thing could go a long way toward reducing inner-city violence.
So which guardrails...isn't that the question?
And I don't make that assumption--not by trust. We are only different than China if we continually work to make it so.
If the govt abuses the technology then vote em out.
This is error correction and why liberal democracy can be trusted to experiment with technology while dictatorships cannot.
No, rights, like privacy, should not be up for majoritarian rule.
A law that only allows previous felons to be tracked.
Many such guardrail possibilities.
Laws that prevent broad-based facial tracking. Only allow facial recognition to be employed when there's a crime suspected.
This is an intuition I hold, too. Laws on the books we're not going to enforce are actively harmful. The Anker paper backs this intuition up from another direction.
This is my hang-up. I agree with most of what Matt's said above, but giving the government the tools to track its citizenry's every move... Let's just say there's a reason China is the main exporter of this technology/civil model.
I broadly agree, but I think this piece’s main weakness is that it’s written against the backdrop of Twitter crazies who oppose law enforcement. Thus it doesn’t address the concerns of normal people who instead prioritize the well-being of the average, law-abiding citizen. Us normies are all for catching (and punishing!) criminals, but we have other concerns about expanding police powers. How do you make sure innocent people are not falsely accused? How do you balance these real needs with privacy concerns (e.g., do we allow any and every cop unmonitored access to the most powerful facial recognition tech)? In short, in a healthy democracy we should always pause before giving the government new powers. Ultimately, as often happens, we may conclude with a sigh that giving this power is a necessary evil, but we may want to think about adding some checks for reassurance. I wish MY would have addressed these mainstream concerns.
For exactly the same reason, we should ban the use of eyewitness testimony. It’s the source of a huge number of false accusations every year.
I can't speak for every jurisdiction, but in the ones I am familiar with, courts absolutely are aware of the low reliability of witness identifications, and the rules of evidence have been re-written to address and mitigate these concerns.
I hadn’t heard that, but if it’s right it’s going to lead to even fewer criminals being punished unless we can make up the gap with more reliable tools.
eye witness testimony requires less lawyerly skill to introduce at trial than any other form of evidence. rules against hearsay make it difficult to testify to what a witness learned from others. what they personally saw is fair game unless it is completely unconnected to the charged offense.
one can certainly call an expert witness to impeach identification testimony, but few criminal defendants can afford experts. i have one client who has a very good job and she is struggling to pay for a single expert. she’s also one of my better off clients.
This might be the case where you live but is not reflective of other jurisdictions.
georgia has adopted the federal rules of evidence with only token amendments. we are quite representative
Can you vaguely indicate which jurisdictions these are? I'm an NA lawyer and while we're vaguely aware of the sketch nature of eyewitness evidence at least my jurisdiction still heavily favours it over other forms of evidence.
No, but we do have strict rules about evidence admissibility, perjury laws, the critical role of cross examination etc. centuries and millennia of experience taught us how to use witness testimony correctly and beware its pitfalls. We should aim for no less when incorporating new methods.
IMO the Federal Rules of Evidence and their state analogues are not by any means paragons of the correct application of evidence (inter alia they're at least in part explicitly counter-Bayesian) and reflect the staunch reliance on tradition to substitute for considered and principled reasoning. There's a prominent, and in my view unfortunate, strain of thought that essentially treats Anglo-American evidentiary norms as good by dint of their persistence and pedigree rather than because they bear the slightest relation to how actual humans conduct themselves and make assessments outside the confines of a court of law.
Not everything in them is bad, mind you, but treating them as the considered result of decades and centuries of evolutionary pressure to reach the correct result is a long way from the correct paradigm in my view.
omg yes. on the other hand, the first thing clients want to do is dig up/drag in dirt on their accusers. the instinct is very strong and transcends race, gender and class. blah blah blah uses drugs, had an affair, whatever. this so predominates what i hear that it can be hard to focus clients on their actual cases.
also, the legal definition of relevance (rule 401 i think) is very very broad. the principle that probative value must outweigh potential for unfair prejudice is 1) obvious and 2) too ill-defined to be much more than a blank check made out to trial judges
In the one case where I was selected for the jury, I was shocked to see that the police officer who testified was not allowed to read from his notes while testifying, but was allowed to look at his notes to "refresh his memory" and then testify again after doing that. I can see that there's some intuitive idea of how the memory works on which this set of restrictions and permissions might make sense, but not on any realistic idea of how it works!
The one jury I've been on, two cops testified in identical words. I was elected jury foreman, and we acquitted in five minutes.
Can you say more about the counter-Bayesian part? I feel like I almost get it but want to be sure I’m understanding you right.
If you have JSTOR access, I think this paper by Laurence Tribe does a remarkably good job of arguing that there are certain ways the standards of evidence *should* be anti-Bayesian: https://www.jstor.org/stable/1339610
And I say this as quite a committed Bayesian!
The biggest and most prominent examples are FRE 404(a)(1) (generally referred to as "conduct in conformity" [with the defendant's character]) and FRE 404(b)(1) (disallowing prior bad acts as evidence of a defendant's character). https://www.law.cornell.edu/rules/fre/rule_404
(a) Character Evidence.
(1) Prohibited Uses. Evidence of a person’s character or character trait is not admissible to prove that on a particular occasion the person acted in accordance with the character or trait.
[...]
(b) Other Crimes, Wrongs, or Acts.
(1) Prohibited Uses. Evidence of any other crime, wrong, or act is not admissible to prove a person’s character in order to show that on a particular occasion the person acted in accordance with the character.
FRE 404(b)(1) is such a shitshow of a rule that it was eventually abrogated in part as to sexual offenses (FRE 413, https://www.law.cornell.edu/rules/fre/rule_413)
Separately, the hearsay rules more or less in their entirety. In brief: the world you and I know and live in and all of private enterprise basically run on hearsay (modulo the fact that technically hearsay only exists outside a court of law, and thus literally everything outside a court is hearsay), and in courts of law it's presumptively disallowed.
I think this is all kind of addressed, at least implicitly. The thesis is that more effective policing means less crime means fewer accusations at all, which should lead to fewer false accusations. Better technology use should lead to fewer false accusations and/or easier exonerations (as we’ve seen with the expanded use of DNA evidence).
I get the apprehension about giving the police more tools, but I think you need to make a case for what they’re going to use those tools for that’s different and worse from what they already do. I think that’s an easy case to make when the new tools are military weapons and vehicles. With surveillance stuff? I’m not sure how much would change given the degree to which we’re already tracked by cameras and internet stuff. What would they use it for that they can’t do already with eyewitnesses, patchwork video availability, etc.; and why would it be bad?
IMO it’s addressed only under the implicit premise of a police force that’s 100% professional and in good faith. I’m far from those demonizing the police, but let’s not forget that they’re human, that power corrupts, etc., and frankly that beyond the theoretical concern, evidence isn’t lacking for abuse of current police powers by bad actors on all levels. I think these concerns deserved explicit and comprehensive attention in the piece.
But what part of this involves giving the police more power or relies on them being 100% professional and in good faith? They can already arrest whoever they want with lower standards of evidence. They can stop and kill people with relative impunity. The people that they do investigate, charge, and convict end up in prison for a very long time! IMO one of the underlying assumptions here is that the current police aren't actually that good in the US so we need to find a way to make them better. So let them use facial recognition instead of relying on eye-witnesses and racial profiling. Use ankle bracelets and breathalyzers so there's less differential ("racially problematic") enforcement of bail/probation conditions. Speed cameras instead of DWB! (I know that last one is from a different piece than here, but it's in the same spirit).
Assume that the police are... sub-optimal (I certainly do!). But then tell me how these proposed powers (and I'm not sure how much of this I would classify as new powers per se, but ok) would lead to them doing worse things. As I mentioned above, this is easy to do for the military stuff: If you give cops more guns they will shoot more people. If you give them tanks and shit they will act more like an occupying force with any provocation (including riots or whatever, but also even just peaceful protests). Even if you just give them cool tactical gear (helmets, vests, cool belts, etc), it will encourage the cosplay warrior crap which makes them less effective and more dangerous to the public.
Can you make a similar case against using facial recognition and location monitoring? I know that sometimes people use "don't break the law if you don't want to be punished" in bad faith. But I think the point here is that if you have effective crime detection, then you won't get falsely accused of breaking into houses if the GPS shows that you weren't breaking into houses. Or if you don't drink above a certain amount then the cops won't waste time following and harassing you to check up on whether you're complying with your probation/parole conditions. So what is the case for "using these technologies would let the cops do new bad things" instead of doing fewer of the old bad things?
To be fair, if you're just of the opinion that they'll find a way to abuse anything given the chance, I 100% won't argue with that. But I do think that there are levels of degree, and that what's proposed might lead to less police misbehavior than the status quo.
If police officers are too "fucking stupid" to use valuable tools to reduce the crime rate the solution is to recruit smarter people into policing not ban the use of valuable tools.
This doesn't make sense to me as an argument against more non-human surveillance. If you don't trust the police, wouldn't you want there to be evidence outside of a detective saying "he did it"? If you were a juror, would you feel more confident convicting based on eyewitness testimony or on camera/DNA evidence?
Neither, or either. I’d be confident convicting based on the weight of the accumulation of the evidence, the more the better, but mostly the better scrutinized the better. Camera/DNA evidence isn’t some objective truth immune from heuristic challenges and manipulation, and it requires rigorous methodologies of application just like witness testimony.
In any case all this is a separate issue from the question of abuse. The answer to that is simple. It’s not black or white. You don’t write someone a blank check but you do trust them with a signed check for a specific sum despite that also compromising you to an extent (your routing and account numbers are on there etc). Life requires balance. With government we need to try to get good people there but also put guardrails on them to guide them to do their work properly and not abuse their powers.
I suspect that a good deal of the concern about facial recognition software is rooted in a distrust of the police and how they will abuse it.
Other than senior citizens whose opinions are shaped by television police procedurals, a lot of people (and not just Twitter crazies) believe that police are prone to abusing their authority -- particularly when it comes to interactions with Black people.
Because of that distrust there is probably a stronger than is actually warranted reluctance to give police even more tools that could be abused.
I think the trust/distrust dichotomy isn’t helpful. There are very few angels or devils out there. We need to rather think in terms of cost benefit analysis (as MY does) but then *also* account for potential abuses both within and outside the legal procedure and regulate to minimize them (the issue MY overlooks).
P.S. I strongly recommend reading John McWhorter on the police and black people. Some of the worst things most liberals now take for granted are simply factually untrue. It’s a very sad state of affairs.
"this piece’s main weakness is that it’s written against the backdrop of Twitter crazies who oppose law enforcement."
I don't think the idea remains with just "Twitter crazies." In my undergrad, that ethos infused many anthropology/sociology/ethnology/South Asian studies talks, because the professors in those disciplines regularly worked in non-democratic or authoritarian countries for which the legal system did not accurately reflect the practices of the populace, and greater enforcement means greater hardship. I suspect the principle then diffuses out to people who study the US legal system, without anybody really paying attention to its preconditions for relevance (also relevant: Matt's discussion of trans-Atlantic pollination in last week's mailbag).
But yeah, in a democratic society, the concept doesn't really make sense.
I never meant to imply that Twitter crazies don’t have day jobs, but that’s not how they’re influencing MY and skewing the emphases in his public writing.
I think we need to talk about the elephant in the room: if catching criminals is good, it will involve catching a disproportionate number of black men.
Progressives don't like enforcing crime because it will target black people much more commonly. If we use facial recognition software, or make more extensive use of DNA databases, they will be used much more commonly against black men. Of course, if these tools ultimately reduce mass incarceration while reducing crime, then everyone is better off, including the black men who would otherwise end up going to prison. But progressives will fight these measures tooth and nail. Expect articles in Vox about racially biased facial recognition, for example.
Liberals would be better off talking about this stuff, just as conservatives would be better off talking about a coherent health care policy.
There’s no elephant, and this fact was always well known. The other side of the coin is that black communities will disproportionately benefit because they are disproportionately victimized by crime. Such policy would thus be neither racist nor “affirmative action” but simply the state doing its job to protect its citizens. The problem of disproportionate black crime has root causes that need to be addressed as such, but the idea that giving black criminals a pass on racial grounds is a legitimate or helpful measure, let alone some “anti-racist” necessity, is pure insanity that never occurred to anyone before the last decade, and it’s still totally unclear to me how this toxic nonsense ever came to mainstream attention.
I think you underestimate the elephant content of the room. I've encountered progressives in real life, not just on Twitter, who are genuinely willing to take the position that wealthy white people commit *violent felonies* at least at the same rate, or possibly even higher rates(!), than poor black people, and that the reason that's completely unreflected in crime statistics is that police and/or prosecutors cover up this wave of white criminality, if it ever gets reported in the first place (which they believe in many cases it isn't).
This is especially popular among the conspiracy theorist left.
Someone made this argument to me on Reddit the other day! They added the additional theory that rich white neighborhoods were intentionally under-policed for this exact reason.
Yep, I've encountered that variation in real life too. It typically comes up when someone is explaining how it's racist that there's so much more police activity in [POOR NEIGHBORHOOD] than [RICH NEIGHBORHOOD] and I respond by pointing out that [POOR NEIGHBORHOOD] has dramatically higher violent crime rates, to which the other person replies that [RICH NEIGHBORHOOD] is really just as violent, but the police stay away from it because it's full of rich white people. If I proceed further and question why rich white people would tolerate all that crime in their neighborhood, I usually get a response that sounds like some variation of "Get Out"/"The Purge" where the rich white people are supposedly beating/raping/torturing/murdering non-white workers or random passers-through.
My experience of the "black community" indicates that today it's still very much the same community which backed Clinton's '94 crime bill to the hilt.
The hipster lefties just want to pretend that the 12 black humanities Ph.D grads that they follow on Twitter are "representative" of African Americans, of whom under a quarter have a bachelor's or graduate degree, and have erected a circular firing squad and peer pressure architecture around that stupid shibboleth.
I've always thought that there's a niche waiting to be filled in The Discourse for a Black commentator pushing back against those 12 Black humanities Ph.D grads. There's probably already someone out there that I'm just not aware of. Someone like Ruben Gallego telling people to cut it out with the Latinx crap.
John McWhorter
Roland Fryer, Thomas Chatterton Williams, Kmele Foster, Glenn Loury, Coleman Hughes, even Adolph Reed. There’s no shortage, but our gatekeepers on the liberal side seem to have decided that they’re all suspect somehow.
Hence the quotation marks, yep.
No ethnic or racial grouping within the US is a monolith.
But I regard it as a particularly stupid exercise for the woke/hipster left to anoint a very small, skewed sample as representative of such a "community."
It’s like we’re so afraid that some conservative will bring up “black on black crime” that we make little or no effort at helping the people who are being terrorized. Again: the vast majority of people who live in our nation’s scariest zip codes are 1) black and 2) law abiding citizens. How is this not one of the most pressing concerns of people who care about equity?
Because they don’t actually care about black lives but about virtue signaling and policing their fellow liberal whites?
Also, given that perpetrators of black-on-black crime are less likely to be caught than perpetrators of black-on-white crime, and given that most murders are intraracial, more enforcement will mean more black people going to prison.
If punishment is like lightning striking, if the randomness of punishment blunts deterrence, and if a lot of black people are being struck by lightning, a sensible anti-racist might want lightning strikes to be gentler.
What, you want black-on-black crime to be punished less severely?
Do you not value Black Lives?
(inevitable insane-prog follow-up)
You are perhaps being facetious, but I would make that exact response, word for word (sans the parenthetical), in earnest.
I literally made that exact response on the traffic camera article a while back.
Yeah. It's a facetious/bad-faith interpretation of what was said above.
But it actually would be a legit response to the suggestion in some ways.
It would also just be a 180 from their standard response, but I could see them being goaded into it, regardless of the inconsistency.
I love it when the conservatives all get together and take turns straw-manning the opposition and patting each other on the back.
“Like lightning striking” presumes both random and rare. If something happens all the time, it’s no longer “random” especially when it’s happening to people who are committing the same kind of action.
Right now there are far more white people getting away with crimes than black people (see my comment on another thread about drunk driving), so it’s much more likely that catching more criminals will probably result in the cohort of caught criminals having a racial makeup that is closer to that of the rest of the country.
I think the important thing to note is that basic moral decency requires us not to give half a fuck whether this is true or not.
Enforce the law, protect the innocent, and send the guilty to prison, and if we do it well enough we will probably find that there are fewer guilty people and we can probably imprison them for shorter terms.
Shouldn’t the racial makeup of criminals caught resemble that of criminals, period?
It’ll be closer to that too.