Posted at 11/28/22 8:42pm
When cops decide they’ve found the right perp, very little can persuade them to look elsewhere. This tunnel vision has the tendency to take years of freedom away from innocent people. And it would be terrible enough if officers simply refused to consider exonerative evidence. But in this case (like far too many others), the investigators went beyond simply ignoring other evidence to falsifying the evidence they had to ensure the person they picked out for the job ended up in jail. Hillel Aron of Courthouse News Service has the background on this decision [PDF] handed down by the Tenth Circuit Court of Appeals.

In 1999, Floyd Bledsoe, a 23-year-old farmhand, was living in Jefferson County, Kansas, with his wife Heidi, their two young sons and Heidi’s 14-year-old sister Camille Arfmann. Bledsoe’s 25-year-old brother, Tom, lived close by. Tom was partially deaf and had certain intellectual limitations, according to the lawsuit Floyd Bledsoe would later file, as well as a history of troubling sexual behavior that included pursuing young girls.

On Nov. 5, 1999, Camille went missing. Two days later, according to Bledsoe’s lawsuit, Tom told both his Sunday school teacher and his parents that he had killed her. Tom’s parents hired an attorney, Michael Hayes, who took Tom to the Jefferson County Sheriff’s Department that same day. Tom told investigators how he killed Camille and where her body could be found. Hayes turned over the murder weapon — a recently purchased 9 mm handgun. Tom was arrested and charged with homicide.

But Tom would soon change his story, recanting his confession and accusing his brother of the murder. That led investigators to go after Floyd Bledsoe. And once they were focused on Floyd, they forgot all about Tom. Not only did they refuse to consider that his recanting might be a lie, they falsified evidence to ensure the charges against Floyd stuck. Here’s how it started:

“Shortly before Tom’s staged recantation,” Tom’s defense attorney “Hayes sought [Bledsoe] out and told him that Hayes was taking Tom off the ‘hot seat’ and putting [Bledsoe] on, or words to that effect.” On November 12, a Kansas Bureau of Investigation (“KBI”) officer, Defendant Johnson, administered lie detector tests to both Tom and Bledsoe. During his exam, Tom recanted his confession and incriminated Bledsoe. But Tom “failed the question” of whether he shot Camille, and was so overcome with guilt immediately after the lie detector test that he confessed again to killing Camille. Nonetheless, the KBI officer told Tom that he should continue lying to implicate Bledsoe.

Floyd Bledsoe, however, passed his lie detector test. KBI investigator Johnson stepped in again to interfere with the investigation.

Defendant Johnson falsified the results, however, inaccurately reporting that Tom had been truthful in denying his involvement in the murder, while Bledsoe had been deceptive in denying that he was involved. Based on those false polygraph results, the prosecutor dropped the charges against Tom.

Tom’s story was the central piece of the prosecution’s evidence during Floyd Bledsoe’s trial. According to Bledsoe, prosecutors withheld anything tying Tom to the crime, fabricated a statement from Floyd that undercut his alibi, did not disclose inculpatory statements made by Tom to Floyd’s lawyer, and refused to search Tom’s home or collect any other physical evidence that might have linked Tom to the murder. After Floyd had spent sixteen years in prison, DNA testing cleared him and implicated Tom Bledsoe.
Tom Bledsoe committed suicide shortly after this evidence was obtained, leaving behind a suicide note apologizing for framing his brother, a note that said the county attorney (Jim Vanderbilt) made him do it and told him to keep his mouth shut. Floyd Bledsoe sued the involved officers for violating his rights. The lower court refused to grant immunity to the officers, noting that the allegations raised by Bledsoe described police actions clearly established to be unlawful. The Tenth Circuit Appeals Court arrives at the same conclusion.

The officers raised several arguments for being allowed to walk away from this wrongful conviction. The court doesn’t like any of them, including this attempt to portray the railroading of an innocent man as nothing more than the good faith efforts of law enforcement officers just trying to do their job.

Appellants assert that Bledsoe’s claims are facially implausible because there is an equally possible innocent explanation for their charging Bledsoe—that they honestly, but mistakenly, believed he had killed Camille and that, at most, they were negligent in investigating the crime, which is not actionable under § 1983. […] Similarly, Appellants assert that they are entitled to qualified immunity because, at most, they were mistaken in believing Bledsoe was guilty of Camille’s rape and murder, and their investigation was at most negligent.

Wrong, says the Tenth Circuit. What Bledsoe alleges far exceeds the innocent actions of cops mistakenly going after the wrong perp.

Those arguments mischaracterize Bledsoe’s allegations. Bledsoe alleges that Defendants fabricated false evidence against him, knowingly suppressed exculpatory evidence that would have proven his innocence, and facilitated his arrest, pretrial detention and trial without probable cause to believe he was guilty. None of those alleged actions, by definition, can be done mistakenly or “innocently.”

It’s pretty tough to innocently ignore a suspect’s multiple confessions, failed lie detector test, and previous interactions with the murder victim. In fact, the court says, there’s enough in Bledsoe’s allegations to suggest the opposite of innocence: a conspiracy to violate his rights, one participated in by officers, investigators, and prosecutors. Bledsoe can move forward with his lawsuit. All but one claim survives the multiple defendants’ appeal of the lower court ruling.

For the foregoing reasons, then, we conclude that Bledsoe adequately alleged that each Appellant participated in depriving him of his constitutional rights and that, except for the failure-to-intervene theory, the alleged constitutional violations were clearly established by 1999. Said another way, except for the failure-to-intervene claim, each Appellant was on notice in 1999 that their conduct, as Bledsoe has alleged it—suppressing exculpatory evidence that would have shown Bledsoe’s innocence, fabricating evidence to use against him, and using that evidence to arrest, detain and prosecute him for a crime he did not commit—was unconstitutional. The district court, thus, correctly denied each Appellant qualified immunity on Bledsoe’s substantive constitutional claims, and on his conspiracy and personal participation theories of liability.

This puts Bledsoe closer to obtaining some form of justice for the injustice he spent 16 years subjected to.
And this overwhelming denial of qualified immunity to multiple law enforcement defendants on multiple counts will perhaps result in a settlement being offered before this goes much further in court, something that may force the involved entities to hand over evidence showing how much they screwed this innocent man. And that evidence may show this sort of behavior was routine. There’s no reason to believe it isn’t. Everyone sued here seemed pretty comfortable railroading an innocent man, which suggests violating rights was just considered part of the job.

[Category: 10th circuit, evidence, fabricated evidence, floyd bledsoe, kansas, kbi, qualified immunity, tom bledsoe]

Posted at 11/28/22 4:41pm
A clear demonstration that the EU Copyright Directive is a badly-drafted law is the fact that it has still not been implemented in national legislation by all the EU Member States three years after it was passed, and over a year after the nominal deadline for doing so. That’s largely because of the upload filters of Article 17. The requirement to block copyrighted material without authorization while fully preserving users’ rights is probably impossible to put in place in any straightforward way. As a result, national legislators have had to come up with various kinds of approximations when drawing up their local laws. Over on the Communia blog, Paul Keller has a good exploration of how the Czech Republic is tackling the issue. The current Czech proposal is particularly interesting because it is one of the first to be available after the EU’s top court, the Court of Justice of the European Union (CJEU), brought a little clarity on the safeguards that need to be included in national implementations of Article 17. Notably, the language of the latest version of the Czech law:

inserts one of the core findings of the CJEU ruling — that platforms can only be required to detect and block content on the basis of the information provided by rightholders and cannot be required to block content which, in order to be found unlawful, would require an independent assessment of the content by the platforms — into the Czech implementation. While it does so by referencing concepts developed by the [Advocate General in his opinion on the CJEU case], instead of the criteria from the final judgement, it is a welcome addition that will offer a better protection to users’ rights than the literal implementation [originally] proposed by the government.

Another innovation by the Czech lawmakers rightly tries to address the problem of platforms that repeatedly block or remove lawful user uploads. Unfortunately the way it proposes to do that is to shut down the entire platform. As Keller writes:

While it provides a powerful incentive for platforms not to overblock, invoking this remedy would result in substantial collateral damage that negatively affects the freedom of expression of all other uses of the affected platform. So what could a more reasonable — and less harmful — remedy look like? What if instead of threatening to shut down the offending platform, [it] threatened to shut down the upload filters instead: If it would prohibit the provision of the automated content recognition (ACR) system for the purpose of blocking or removal of user uploads?

Shutting down upload filters that overblock is a really good idea, since the algorithmic filters lie at the heart of the problem with Article 17. Moreover:

if the scope of the injunctive relief would be limited to banning the continued provisions of overzealous upload filters, the proposed Czech implementation of Article 17 could even become a template for other Member States seeking to bring their implementations in line with the requirements of the CJEU while otherwise staying relatively close to the text of the directive.

Let’s hope the Czech Republic shows the way by adopting Keller’s suggestion, and that other EU countries follow. It won’t turn the Copyright Directive into a good law – nothing could do that – but it will blunt some of its worst effects.

Follow me @glynmoody on Twitter, or Mastodon. Reposted from the Walled Culture blog.

[Category: article 17, cjeu, copyright, czech republic, knowledge, overblocking, upload filters]

Posted at 11/28/22 2:34pm
We’ve written a number of posts about the problems of KOSA, the Kids Online Safety Act from Senators Richard Blumenthal and Marsha Blackburn (both of whom have fairly long and detailed histories of pushing anti-internet legislation). As with many “protect the children” or “but think of the children!” kinds of legislation, KOSA is built around moral panics and nonsense, blaming the internet any time anything bad happens, and insisting that if only this bill were in place, somehow, magically, internet companies would stop bad stuff from happening. It’s fantasyland thinking, and we need to stop electing politicians who live in fantasyland.

KOSA itself has not had any serious debate in Congress, nor been voted out of committee. And yet, Blumenthal admitted he was actively seeking to get it included in one of the must-pass year-end omnibus bills. When pressed about this, we heard from Senate staffers that they hadn’t heard much opposition to the bill, so they figured there was no reason to stop it from moving forward. Of course, that leaves out the reality: the opposition wasn’t that loud because there hadn’t been any real public opportunity to debate the bill, and since until a few weeks ago it didn’t appear to be moving forward, everyone was spending their time trying to fend off other awful bills.

But, if supporters insist there’s no opposition, well, now they need to contend with this. A coalition of over 90 organizations has sent a letter to Congress this morning explaining why KOSA is not just half-baked and not ready for prime time, but so poorly thought out and drafted that it will be actively harmful to many children. Notably, signatories on the letter — which include our own Copia Institute — also include the ACLU, EFF, the American Library Association and many more. It also includes many organizations who do tremendous work actually fighting to protect children, rather than pushing for showboating legislation that pretends to help children while actually doing tremendous harm.

I actually think the letter pulls some punches and doesn’t go far enough in explaining just how dangerous KOSA can be for kids, but it does include some hints of how bad it can be. For example, it mandates parental controls, which may be reasonable in some circumstances for younger kids, but KOSA covers teenagers as well, where this becomes a lot more problematic:

While parental control tools can be important safeguards for helping young children learn to navigate the Internet, KOSA would cover older minors as well, and would have the practical effect of enabling parental surveillance of 15- and 16-year-olds. Older minors have their own independent rights to privacy and access to information, and not every parent-child dynamic is healthy or constructive. KOSA risks subjecting teens who are experiencing domestic violence and parental abuse to additional forms of digital surveillance and control that could prevent these vulnerable youth from reaching out for help or support. And by creating strong incentives to filter and enable parental control over the content minors can access, KOSA could also jeopardize young people’s access to end-to-end encrypted technologies, which they depend on to access resources related to mental health and to keep their data safe from bad actors.

The letter further highlights how the vague duty of care standard in the bill will be read to require filters for most online services, but we all know how filters work out in practice.
And it’s not good:

KOSA establishes a burdensome, vague “duty of care” to prevent harms to minors for a broad range of online services that are reasonably likely to be used by a person under the age of 17. While KOSA’s aims of preventing harassment, exploitation, and mental health trauma for minors are laudable, the legislation is unfortunately likely to have damaging unintended consequences for young people. KOSA would require online services to “prevent” a set of harms to minors, which is effectively an instruction to employ broad content filtering to limit minors’ access to certain online content. Content filtering is notoriously imprecise; filtering used by schools and libraries in response to the Children’s Internet Protection Act has curtailed access to critical information such as sex education or resources for LGBTQ+ youth. Online services would face substantial pressure to over-moderate, including from state Attorneys General seeking to make political points about what kind of information is appropriate for young people. At a time when books with LGBTQ+ themes are being banned from school libraries and people providing healthcare to trans children are being falsely accused of “grooming,” KOSA would cut off another vital avenue of access to information for vulnerable youth.

And we haven’t even gotten to the normalizing-surveillance and diminishing-privacy aspects of KOSA:

Moreover, KOSA would counter-intuitively encourage platforms to collect more personal information about all users. KOSA would require platforms “reasonably likely to be used” by anyone under the age of 17—in practice, virtually all online services—to place some stringent limits on minors’ use of their service, including restricting the ability of other users to find a minor’s account and limiting features such as notifications that could increase the minor’s use of the service. However sensible these features might be for young children, they would also fundamentally undermine the utility of messaging apps, social media, dating apps, and other communications services used by adults. Service providers will thus face strong incentives to employ age verification techniques to distinguish adult from minor users, in order to apply these strict limits only to young people’s accounts. Age verification may require users to provide platforms with personally identifiable information such as date of birth and government-issued identification documents, which can threaten users’ privacy, including through the risk of data breaches, and chill their willingness to access sensitive information online because they cannot do so anonymously. Rather than age-gating privacy settings and safety tools to apply only to minors, Congress should focus on ensuring that all users, regardless of age, benefit from strong privacy protections by passing comprehensive privacy legislation.

There’s even more in the letter, and Congress can no longer say there’s no opposition to the bill. At the very least, sponsors of the bill (hey, Senator Blumenthal!) should be forced to respond to these many issues, rather than just spouting silly platitudes about how we must “protect the children” when his bill will do the exact opposite.

[Category: filters, for the children, kosa, online safety, parents, privacy, richard blumenthal, surveillance]

Posted at 11/28/22 1:06pm
The AT&T Time Warner and DirecTV mergers were a monumental, historic disaster. AT&T spent $200 billion (including debt) to acquire both companies, thinking it would dominate the video and internet ad space. Instead, the company lost 9 million subscribers in nine years, fired 50,000 employees, closed numerous popular brands (including Mad Magazine), and basically stumbled around incompetently for several years before recently spinning off the entire mess for a song.

The New York Times recently published the kind of merger post mortem the media usually can’t be bothered to do. The outlet spoke to dozens of individuals at the companies who detail how AT&T leadership was completely out of its depth, refused to take any advice, and was blinded by the kind of hubris developed over a generation of being a government-pampered telecom monopoly.

While the Trump DOJ ultimately sued to stop the deal, it was never actually due to antitrust concerns. It was because a petulant President was mad at CNN for critical media coverage. And it was also because Time Warner repeatedly refused to sell itself to Fox boss Rupert Murdoch, who likely hoped to either kill the deal or nab some divested chunks of the acquired assets. But the NYT tells an interesting tale of a meeting in which not only did Trump publicly praise AT&T’s chairman, but AT&T promised it would fire CNN boss Jeff Zucker if the administration approved the deal:

Mr. Trump kept up his Twitter diatribe against CNN and Mr. Zucker. But on June 22, Mr. Stephenson visited the White House along with other chief executives, and Mr. Trump was surprisingly effusive in his praise for the AT&T chairman, saying publicly that he had done “really a top job.”

Mr. Stephenson’s warm presidential reception was shortly followed by a visit to Time Warner by Larry Solomon, the head of corporate communications for AT&T. Mr. Solomon told Mr. Ginsberg that he was there to give him a “heads up” that “we’re going to fire Jeff Zucker,” Mr. Ginsberg recalled.

AT&T denies that ever happened. Then again, it also tries to claim it didn’t take a total bath on the deal. The New York Times, meanwhile, seems to downplay the persistent indications that the Trump DOJ’s sudden and completely uncharacteristic interest in antitrust issues was not just a lawsuit driven by Trump’s petty anger at CNN, but Rupert Murdoch’s interest in acquiring or at least harming a competitor.

The Trump DOJ ultimately lost the lawsuit due to its sloppy failure to truly illustrate the consolidative harms of the deal (since, again, they didn’t actually care about that aspect of it). Calls by ex-DOJ officials for an investigation into the DOJ abusing its power (since, again, the lawsuit was mostly about Trump’s ego and helping Rupert Murdoch) went nowhere, as such things tend to do in the U.S.

AT&T’s attempt to pivot from stodgy old telco to modern video advertising juggernaut completely collapsed anyway under the weight of its ego and incompetence, forcing the company to spin off DirecTV and Time Warner in various deals that continue to go badly, just in new incarnations with new names (Warner Bros. Discovery is itself a dumpster fire for the ages). Don’t feel too badly for AT&T though.
Something the NYT doesn’t really mention is how the Trump era in general remained an all-time great one for AT&T, which not only got a $42 billion Trump tax cut for doing nothing, but also received numerous regulatory favors from the Trump FCC, from the gutting of net neutrality and media consolidation rules to the effective lobotomization of the FCC’s consumer protection authority.

The Times does mention how most of the AT&T executives who bungled the disastrous deal received massive cash payouts as punishment. But the Times downplays how the megadeal eliminated jobs for more than 50,000 (and counting) employees, and resulted in a generally shittier product for consumers (CNN under Discovery leadership is in the midst of a disastrous bid to shift its coverage from bland and feckless centrism to more right-wing authoritarian appeasement, and it’s not going great).

The Times also doesn’t really touch on the fact that mindless consolidation like this happens constantly. Or that U.S. antitrust enforcement is comically broken. Or that the press (including the Times) can routinely be found un-skeptically parroting the supposed synergies of such deals pre-merger, helping create the problems they report on. At least we got a post-mortem, which is more than most major press outlets can be bothered to do in a country that treats disastrous, pointless mergers like a national pastime.

[Category: at&t, directv, time warner, antitrust, antitrust reform, competition, consolidation, doj, media, mergers, telecom, trump]

Posted at 11/28/22 11:44am
Lots of people like to pretend California is home to certifiable Communists: a socialist collective masquerading as a state. But California is not beholden to socialist ideals. It has its own dictatorial ideological bent, one that’s only slightly tamed by its election of liberal leaders. Every move towards the left is greeted by an offset to the right. If anything, California is the Land of Compromise. Ideological shifts are short-lived. What really lasts are the things the California government does that give the government more power, even as they assure the electorate that their concerns have been heard.

Case in point: San Francisco. In early 2019, the city passed a ban on facial recognition tech use by government agencies. This move placed it on the left, at least in terms of policing the police. (The law was amended shortly thereafter when it became clear government employees were unable to validate their identity on city-issued devices.) Communist paradise indeed.

But no, not really. San Francisco’s lawmakers may have had some good ideas about trimming the government’s surveillance powers, but those good ideas were soon compromised by law enforcement. And those compromises have been greeted with silence.

In May of this year, cops were caught accessing autonomous vehicle data in the hopes of obtaining evidence in ongoing investigations. A truly autonomous vehicle creates nothing but third-party data, so there was little need to worry about Fourth Amendment implications. But still, it seems a city concerned with government overreach would express a little more concern about this cop opportunism.

Nothing happened in response to this revelation. Instead, four months later, city lawmakers approved on-demand access to private security cameras, reasoning that cops deserved this access because crime was still a thing. Mayor London Breed justified the move towards increased authoritarianism in a [checks notes] Medium post:

We also need to make sure our police officers have the proper tools to protect public safety responsibly. The police right now are barred from accessing or monitoring live video unless there are “exigent circumstances”, which are defined as events that involve an imminent danger of serious physical injury or death. If this high standard is not met, the Police can’t use live video feed, leaving our neighborhoods and retailers vulnerable.

These are the reasons why I authored this legislation. It will authorize police to use non-City cameras and camera networks to temporarily live monitor activity during significant events with public safety concerns, investigations relating to active misdemeanor and felony violations, and investigations into officer misconduct.

When the going gets tough, the elected toughs get chickenshit. All it took to generate carte blanche access to private security cameras was some blips on the crime radar. Whatever gains were made with the facial recognition tech ban were undone by the city’s unwillingness to stand by its principles when isolated incidents (hyped into absurdity by news broadcasters) made certain residents feel ways about stuff.

The news cycle may have cycled, but the desire to subject San Francisco to extensive government intrusion remains. If the cops can’t have facial recognition tech, maybe they should be allowed to kill people by proxy. It’s a super-weird take on law enforcement, but one that has been embraced by apparently super-weird city legislators, as Will Jarrett reports for Mission Local.
A policy proposal heading for Board of Supervisors approval next week would explicitly authorize San Francisco police to kill suspects using robots.

The new policy, which defines how the SFPD is allowed to use its military-style weapons, was put together by the police department. Over the past several weeks, it has been scrutinized by supervisors Aaron Peskin, Rafael Mandelman and Connie Chan, who together comprise the Board of Supervisors Rules Committee.

Yikes. Turning residents into Sarah Connor isn’t a wise use of government power. Giving police additional deadly force powers is unlikely to heal the immense rift that has developed as cops continue to kill people with disturbing frequency, all while enjoying the sort of immunity that comes with the territory. Attempts to mitigate the new threat authorized by this proposal were undermined by the San Francisco PD, which apparently thinks killing people with modified Johnny Fives is a good idea:

Peskin, chair of the committee, initially attempted to limit the SFPD’s authority over the department’s robots by inserting the sentence, “Robots shall not be used as a Use of Force against any person.”

The following week, the police struck out his suggestion with a thick red line. It was replaced by language that codifies the department’s authority to use lethal force via robots: “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD.”

The edit may seem all pointy-eared-Spock logical when taken at face value. But it isn’t. What cops believe poses an imminent threat to officers is so far outside the norm expected by reasonable citizens, it makes this edit meaningless. Cops are allowed to make highly-subjective judgment calls, the sort of thing that often leads to unarmed people (especially minorities) being killed by law enforcement officers. Add this rights-optional autonomy to autonomous killing machines and you’re asking for the sort of trouble residents will be forced to subsidize as the city settles lawsuits triggered by cops who think a person’s mere existence is enough of a threat to justify deadly force.

Adding this to the arsenal of rights-optional weapons deployed by the SFPD ushers in a new era where cops can be judge, jury, and executioner. I mean, in many cases they already are. But this adds a level of Judge Dredd-adjacent dystopia where cops can try to claim it wasn’t them but rather the one-armed man robot. The San Francisco legislature should kill this bill deader than the residents the SFPD kills. The imminent threat justification is too vague and too easily abused to allow officers to absolve their own guilt by allowing a robotic assistant to perform killings on their behalf.

[Category: autonomous killing, london breed, robot police, robots, san francisco, sfpd]

Posted at 11/28/22 11:39am
Microsoft Office Home & Business for Mac 2021 (2-Pack)

This bundle is for families, students, and small businesses who want classic MS Office apps and email. It includes Word, Excel, PowerPoint, Outlook, Teams, and OneNote. Get 2 licenses for Microsoft Office Home and Business for Mac for $55.

The Million Dollar Puzzle

Are you a fan of solving puzzles? Do you like winning money? Then MSCHF has a treat for you. MSCHF brings you the Million Dollar Puzzle. All you have to do is buy this 500-piece jigsaw puzzle, complete it, and get a chance to win up to $1,000,000. Get one puzzle for $20, two for $40, or 4 for $80.

The Unlimited Learning Subscription Bundle Featuring Rosetta Stone

Continuing to learn new skills and grow is an attainable goal for anyone with the right motivation. The Unlimited Learning Subscription Bundle includes StackSkills online courses and Rosetta Stone. StackSkills is the premier online learning platform for mastering today’s most in-demand skills. With its intuitive, immersive training method, Rosetta Stone will have you reading, writing, and speaking new languages like a natural in no time. The bundle is on sale for $149.

MagStack Foldable 3-in-1 Wireless Charging Station with Floating Stand (2-Pack)

The perfect on-the-go wireless charging station that also transforms into a floating stand for smartphone FaceTime or video playback while charging. The MagStack 3-in-1 wireless charging station features a foldable design with 3 wireless charging spots, enabling charging for up to 3 devices simultaneously, including iPhone, Apple Watch, AirPods Pro, AirPods with Wireless Charging Case, other Qi-compatible Android phones, and Bluetooth earbuds. With its versatile foldable design, MagStack also folds into a space-saving single-device charger for your phone or earbuds, neatly folding into a slim wallet-sized stack that makes it ultra-portable and functional for your next trip. Get two of them on sale for $88.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

[Category: daily deal]

Posted at 11/28/22 10:16am
There’s this narrative out there that it has been decided that social media is bad for children, and that it is such a big danger that regulation is needed. A few months ago, we wrote about a Berkeley professor who claimed that this was settled and that there was no longer any question as to the nature of the harm to children around the globe. In that post, we went through all of the linked research, showing it proved nothing of the sort.

For example, lots of people rely on the reporting around the Frances Haugen leaks from inside Facebook to argue that Facebook knew that Instagram causes body image issues for children (and then most people leapt to the belief that the company then ignored and downplayed that finding). But, as we noted, the actual study told a very, very different story. As we pointed out at the time, the study was an attempt to do the right thing and understand if social media like Facebook was actually causing negative self-images among teenagers, and the study found that, for the most part, the answer was absolutely not. It looked at 12 different potential issues, surveyed teenaged boys and girls, and found that in 23 out of 24 categories, social media had little to no negative impact, and quite frequently a mostly positive impact. The only issue where the negative impact outweighed the positive impact was on body image issues for teenaged girls, and even then it was less than one-third of the teen girls who said that it made things worse for them. And the whole point of the study was to find out which areas were problematic, and which areas could be improved upon. But, again, in every other area, “made it better” far outranked “made it worse.”

Of course, you might question whether or not you can believe Facebook’s own research. But now the Pew Research Center, whose work tends to be impeccable, has released a study also highlighting how social media generally seems to be making teenagers’ lives better, not worse.

Eight-in-ten teens say that what they see on social media makes them feel more connected to what’s going on in their friends’ lives, while 71% say it makes them feel like they have a place where they can show their creative side. And 67% say these platforms make them feel as if they have people who can support them through tough times. A smaller share – though still a majority – say the same for feeling more accepted. These positive sentiments are expressed by teens across demographic groups.

When asked about the overall impact of social media on them personally, more teens say its effect has been mostly positive (32%) than say it has been mostly negative (9%). The largest share describes its impact in neutral terms: 59% believe social media has had neither a positive nor a negative effect on them. For teens who view social media’s effect on them as mostly positive, many describe maintaining friendships, building connections, or accessing information as main reasons they feel this way, with one teen saying: “It connects me with the world, provides an outlet to learn things I otherwise wouldn’t have access to, and allows me to discover and explore interests.” – Teen girl

So, once again, the general sentiment is that for many teenagers social media improves their lives. For an even larger portion, it neither improves nor makes their lives worse. It’s just a small percentage who find it problematic. And that sounds about right.
For lots of people of all ages, the evidence suggests that, when used well, social media is a nice and useful tool for staying connected with friends, and sometimes enabling them to express themselves better. It is true that, for some people, it becomes a challenge, and they get sucked into it, and it becomes problematic. But, honestly, given that most teenagers have periods of their teenage years where they feel isolated and alone (for which it would be easy for them to blame social media), these numbers seem astoundingly positive.

The report does note that the negatives of social media are mostly around teens feeling overwhelmed because of all the drama, and that it makes them feel like their friends are leaving them out of things. But, um, those feelings happened in the pre-internet days quite frequently as well. They’re, sorta, what happens as a teenager. It’s unclear that we can or should blame anything on the internet for that.

Perhaps even more interesting, the Pew study suggests that all of the media coverage about social media being bad for teens has convinced kids that it must be true for others. Because they’re not really seeing it themselves. The numbers here are striking. While 32% believe that social media is mostly negative for people their age, only 9% think it’s true for themselves personally.

There’s a lot more in the study that shows there is a lot of nuance here, but one thing seems extremely clear: the idea that social media is universally bad and dangerous for kids is completely false. For many, many kids, it’s actually quite positive. For some, it’s neither good nor bad. It’s only bad for a small percentage who have struggled with it. And, even then, the reasons why they’re struggling sound an awful lot like the kinds of social struggles that existed long before the internet or social media existed.

All of this seems to raise some pretty important questions — especially about the politicians, academics, and media who keep feeding us this moral panic that social media is unquestionably bad for teenagers. In California, we’ve already seen a terrifyingly problematic law pass unanimously, with it being stated in the law itself that social media was known to be dangerous for kids. Similar bills are showing up in states across the country. Meanwhile, in Congress, lawmakers are rushing to pass KOSA, the Kids Online Safety Act, built on this very same premise. And, yet, once again, we see that the very factual basis for these laws is false.

Stop letting policymakers and the media scare people into believing a moral panic that just isn’t true. Because, in the process, we’re passing a variety of dangerous laws that will actually do a ton to stifle these services that many more people get true value out of, and replace them with something much more limited, with much greater surveillance.

[Category: pew research, moral panic, social media, teens]

Posted at 11/28/22 6:19am
Back in July, BMW raised a bit of a ruckus when the company announced that it would be making heated seats a luxury option for an additional $18 per month. Now, Mercedes aims to take the concept one step further by announcing that buyers of the company’s new Mercedes EQ electric models will need to pay a $1,200 (plus taxes and fees) yearly subscription to unlock the vehicle’s full performance. The Drive points to Mercedes’ online store, where they note that buyers of the vehicle will need to pay a yearly subscription to unlock an acceleration increase:

According to Mercedes, the yearly fee increases the maximum horsepower and torque of the car, while also increasing overall performance. Acceleration from 0-60 mph is said to improve by 0.8-1.0 seconds and the overall characteristics of the electric motors are supposed to change as well. The extra performance is unlocked by selecting the Dynamic drive mode.

As with BMW’s vision, you’ll likely see a lot of folks with more disposable income than common sense lauding this sort of stuff as pricing and technological innovation, largely because they want to justify their desire to pay a giant company extra for what they perceive as additional status.

The problem: you’re buying a vehicle with this technology (whether it’s faster acceleration or heated seats) already in the car. The cost of that technology is always going to be wound into the existing car’s price one way or another, as no manufacturer is going to take a bath on the retail price. So you’re effectively paying for technology you already own to be turned on. Then, over time, as subscription costs add up over the life of the vehicle, you (and other later owners) are paying significantly more money for that technology than it’s worth (see: paying Comcast thousands of dollars in rental fees for a modem that costs them $50).

The need for quarter-over-quarter returns at any cost opens the door to rampant nickel-and-diming in the future, putting customers on an endless treadmill where paying to turn on technology you already own is constantly getting more expensive in a way that’s just completely untethered from real-world costs. These subscription services also create an arms race with hackers and modders, with the right to repair (something you already own) debate waiting in the periphery. And the FTC is watching companies like a hawk, waiting to see if auto makers make simply enabling something you already own a warranty violation.

[Category: mercedes, automotive, cars, electric cars, ftc, hardware, heated seats, ownership, right to repair, subscription service, warranty]

Posted at 11/27/22 1:00pm
This week, our first place winner on the insightful side is WarioBarker, responding to our post that suggested Elon Musk’s fans will never realize his actions at Twitter are exactly what they accused Jack Dorsey of doing before:

I disagree – they’ll recognize it, but consider it perfectly fine because Musk’s on their side.

In second place, it’s Rocky, responding to a comment defending Musk’s silly Twitter poll about reinstating Trump:

We were told that the vast majority wanted Trump gone. Trump was an extremist. The poll dispelled the myth, and demonstrates that, despite years of demonization, he’s still in the majority.

No, you weren’t told that the vast majority wanted Trump gone, you were told that many wanted Trump gone and he was banned because he repeatedly broke the TOS and then he instigated an insurrection. Once again you are doing your history revisionism, I guess someone rooting for a liar see no problem in promulgating lies themselves.

But at least it was 40 hours that the platform was protected from disinformation. Give us examples of disinformation that CBS has published. Go ahead, make our day. My guess is that you won’t since you are a cowardly liar that’ll refuse to answer as usual.

For editor’s choice on the insightful side, we start out with Bloof and one more comment about that Twitter poll:

Isn’t it amazing how the alleged twitter bot problem stopped being one the moment he realized that he could use them to justify a decision he was already planning to make?

Next, it’s That One Guy with a comment about Musk admitting his content moderation council was a charade to lure back advertisers:

Lying to the people who pay you, never a good look

Good luck getting any advertisers but the bottom of the barrel dregs to stay on or sign back on after that. When you lie to advertisers about how you’re going to keep the site from turning into a cesspit, gleefully help it become a cesspit and then get outed as lying only a complete and utter fool at the companies involved in ads should take you at your word at that point.

At this point it’s not a matter of will Twitter burn to the ground, that is ensured so long as Elon helms it and it’s almost certainly well past saving even if he tried to offload it to some sucker today, rather the question is how quickly he’ll manage to destroy it.

Over on the funny side, our first place winner is Michael Craig with a comment about Musk’s hands-on approach to content moderation:

I’m pretty sure Elon has to make these moderation decisions as he is likely the only person still working at Twitter.

In second place, it’s yet another comment about the Trump poll, this time from an anonymous commenter:

Vox Populi, Vox Dei

A person is smart. People are dumb panicky, dangerous animals, and you know it. — Agent Kay

“The voice of the people is the voice of God.” This does not bode well for God.

For editor’s choice on the funny side, I guess we might as well stay on the Musk theme. First, it’s That Anonymous Coward with a comment about the unbanning of Trump:

Perhaps this is why he allowed Trump to come back, so he would have a bigger whining victim than himself on the platform.

Finally, it’s an anonymous comment about all the big advertisers fleeing Twitter:

I find it quite hilarious that the guy who was trying to get to Mars lost Mars.

That’s all for this week, folks!
Posted at 11/26/22 1:00pm
Five Years Ago

This week in 2017, it became clear the FCC was gearing up for an attempt to hide its attack on net neutrality just before the Thanksgiving weekend. While Comcast once again falsely claimed there was nothing to worry about, the agency did exactly that and released its order on Wednesday. Meanwhile, the DOJ sued to kill the AT&T/Time Warner merger, tech experts were calling on the DHS & ICE to stop their social media surveillance and extreme vetting, and a Q&A about the NSA’s Section 702 program glossed over the problems with incidental collection and domestic surveillance.

Ten Years Ago

This week in 2012, Rep. Zoe Lofgren was turning to Reddit for help putting together an anti-SOPA bill, Rep. Darrell Issa was trying to correct the Copyright Office’s omission and codify the right to rip DVDs, and Senator Patrick Leahy was ready to cave to law enforcement and carve out warrantless spying loopholes from his privacy reform bill. The DOJ was using the fact that Megaupload helped out in the NinjaVideo prosecution against the company, the founder of SurfTheChannel got extra jail time in the UK for revealing documents that raised questions about his conviction, and Malibu Media was getting some serious pushback against its porn trolling operation. Also, for a special Thanksgiving post, we looked at some of the silliest turkey-related patents over the years.

Fifteen Years Ago

This week in 2007, the marketing push was on for Amazon’s brand new Kindle device, and we looked at some of its biggest problems as well as the reasons it might be a turning point for non-phone wireless devices. The writers’ strike was shining a spotlight on Hollywood’s many new competitors, the MPAA was defending its push to force colleges to fight against file sharing, the FSF set up a fund to pay experts who could push back against the RIAA’s evidence in lawsuits, and music retailers were begging the recording industry to drop DRM. We also looked at the skewed perspectives and motivations behind the latest report claiming the internet would soon collapse.

[Category: history, look back]

Posted at 11/23/22 8:39pm
Two years ago, the DOJ opened up an investigation into the Springfield, Massachusetts police department, targeting its troubled Narcotics Unit. Like far too many other drug-focused units, the Springfield Narcotics Unit was filled with officers who routinely engaged in rights violations.

Narcotics Bureau officers regularly punch subjects in the head and neck area without legal justification. The routine reliance on punches during arrests and other encounters that we discovered during our investigation indicates a propensity to use force impulsively rather than tactically, and as part of a command-and-control approach to force rather than an approach that employs force only as needed to respond to a concrete threat.

[…]

Contrary to law, SPD policy, and national standards, Narcotics Bureau officers routinely resort to punching subjects’ head areas with closed fists as an immediate response to resistance without attempting to obtain compliance through other less serious uses of force. Out of all 84 Narcotics Bureau Prisoner Injury Files from 2013 through 2019, roughly 19% of the uses of force reviewed included punches to subjects’ heads, and approximately an additional 8% involved injuries to subjects’ heads from another form of a head strike. In a significant number of these cases, such force was unreasonable.

This investigation expanded to cover the rest of the Springfield PD, which the DOJ reasonably believed also contained problematic officers. Two years later, the DOJ applied a consent decree aimed at bringing the PD back in line with the Constitution. Whether this will change the way the PD handles its bad cops is anyone’s guess, but the PD’s decision to simply rebrand the Narcotics Unit as the Firearms Investigation Unit suggests this won’t be the last time the DOJ will be visiting Springfield.

It’s not just a Springfield problem. It’s a Massachusetts problem. More to the point, it’s a law enforcement problem. If there’s a silver lining, this new investigation of a Massachusetts police department (separated by only a few months from the Springfield investigation) suggests DOJ investigators never got a chance to board a flight out of the state, perhaps saving taxpayers a bit of cash.

Just months after the Justice Department concluded a widespread investigation of police brutality in the Bay State’s third-largest city, Springfield, it opened a new one Tuesday in its second-largest city, Worcester.

In addition to studying what it called a pattern or practice of excessive force by Worcester cops, the department said Tuesday it will investigate whether there has been discriminatory enforcement based on race and sex.

The Worcester PD has problems. The DOJ didn’t provide many specifics in its announcement of this investigation, but there’s plenty of information out there that fills in the gaps in the DOJ’s vague narrative. Perhaps one of the most innocuous allegations is that one officer, Rodrigo Oliveira, routinely held loud, annoying, and crowded parties at his home: parties where guests tended to wander the street annoying people, and where neighbors hoping to have a peaceful night’s rest saw those plans go to waste as Oliveira and his guests got wasted.

But, if you’re not willing to internally police the small things, it’s unlikely you’re willing to hold officers accountable for the rights violations they commit while on the clock.

The lieutenant instructed neighbors to call police if the problems persisted. He also alerted dispatch that a supervisor should always respond to Oliveira’s address for future calls.
“Officer Oliveira said that he understood,” the internal affairs report said. In January 2020, the report concluded Oliveira was “exonerated” from the allegations of “discourtesy” and “awareness of activities.” However, records show the parties and the 911 calls continued, even as the COVID-19 pandemic arrived. An incident history at Oliveira’s address listed eight different “loud party” calls after the internal investigation.

Then there’s the wrongful arrest lawsuit filed by Dana Gaul after Worcester cops decided he fit the description, even though he didn’t actually fit the description.

Witnesses at the scene described the perpetrator as a thin, light-skinned or white man, about 5 feet, 7 inches (1.7 meters) tall, while Gaul is Black, weighs 200 pounds (91 kilograms) and is 5 feet, 10 inches (1.8 meters) tall, his lawyers said. […] Investigators coerced some people — none of whom were actually at the scene of the stabbing — into saying that grainy surveillance video of the suspect looked like Gaul, according to the suit filed by Debra Loevy and Mark Reyes. In addition, DNA found on the victim’s body and clothes was compared to Gaul’s DNA, but did not match, according to his lawyers. Gaul did not know Rose and was nowhere near the scene of the stabbing, his lawyers said.

It also appears the city is willing to run interference for the PD in order to hide evidence of wrongdoing from public records requesters.

A judge excoriated Worcester for its unlawful three-year campaign to keep police misconduct records secret from a local newspaper, writing in a recent ruling that a city lawyer attempted to mislead the court and “did not act in good faith.” Worcester Superior Court Judge Janet Kenton-Walker ordered the city to pay $101,000 to cover the legal fees of its paper of record, the Telegram & Gazette. To hold the city accountable for its intransigence, she also ordered it to pay $5,000 in punitive damages. It is the third time in two decades the T&G has taken the city to court over the issue of police-misconduct records—and the third time the newspaper has succeeded.

Three times. If you want some pattern and practice evidence, this string of lawsuits over PD opacity will provide the DOJ with some investigative ammo. Here’s more: city residents are on the hook for an $8 million settlement in another wrongful conviction lawsuit. That won’t give 16 years of freedom back to the wrongfully accused man, but it’s a start.

A jury has awarded Natale Cosenza, of Worcester, $8 million and $30,000 in punitive damages in a lawsuit involving two Worcester Police sergeants. The jury found that Sergeant Kerry Hazelhurst concealed evidence and fabricated evidence in the case that led to Cosenza’s conviction. The jury also found that Hazelhurst and Sergeant John Doherty conspired to conceal and fabricate evidence. Six others from the Worcester Police Department were removed from the original complaint prior to trial.

Cosenza served 16 years in prison for assault and battery with a dangerous weapon and armed burglary of a woman before being released in 2016.

More pattern and practice evidence: this case involved an officer who contributed to another wrongful conviction.

Doherty was one of the interrogators who extracted a confession from then-16-year-old Nga Truong in 2008. Truong spent three years behind bars awaiting trial before a judge determined the confession was the product of deception, trickery and implied promises to a frightened teenager, according to WBUR.
The City of Worcester settled that lawsuit in 2016 for $2.1 million.

The Worcester PD has generated dozens of civil rights lawsuits and forced residents to pay out millions in settlements. This history of abuse prompted residents to petition the DOJ to investigate the department. Whether or not this petition was instrumental in the DOJ’s investigation is unknown, but the end result is that a department that has ceded control to its officers, no matter how much damage they create, will have to explain to federal investigators why its workforce is so terrible at respecting rights.

The end of this investigation is still years off. And it will probably be another half-decade before the DOJ secures a consent decree that has only a slim chance of actually reforming the Worcester PD. But, for now, the Worcester PD is generating national headlines for all the wrong reasons. Hopefully, that will generate the heat and friction needed to start moving the department towards a better relationship with the people it serves and an increased respect for their rights.

[Category: consent decree, doj, massachusetts, police, springfield, worcester]

Posted at 11/23/22 4:32pm
When people speak of culture, and preserving it, they usually mean the works of recognized artistic giants like Shakespeare, Leonardo Da Vinci, Charlie Chaplin, and Miles Davis. They rarely mean things like live streams of Korean pop music, generally known as K-pop. And yet K-pop is undoubtedly an expression – some would say a particularly vibrant expression – of a characteristic modern culture. It is also subject to copyright, which brings with it problems, as this story on Mashable reveals:

On Monday, Oct. 31, South Korean live streaming app V Live notified users that it’d be shutting down on Dec. 31, 2022. The closure isn’t a surprise — in March, HYBE, owner of the competing app Weverse, announced it had acquired V Live and intended to close the app — but it is a bummer for artists and fans. V Live is the largest-ever archive of live-streamed K-pop content. Where will that content live on when the app goes dark?

Owned by Naver, V Live launched in 2015 as a tool for Korean artists to connect with fans. They did that primarily through live streams, which were then saved in the app as on-demand videos. As K-pop exploded in global popularity, V Live connected these entertainers with an international audience who watched them eat meals, celebrate birthdays, and produce music in real time.

V Live is therefore a great example of how artists can use the latest technology to forge closer relationships with their fans around the world – something that Walled Culture has been advocating as a key element of finding new ways to fund creativity. According to the Mashable article, some of the recordings will be moved to Weverse’s own platform. Specifically, recordings of artists who join Weverse before V Live is shut down. Weverse has also said that artists can download their V Live archives in order to upload them elsewhere. That’s all well and good, but it still leaves many musicians facing the possibility of their streams disappearing forever, because they are unable to move them to new sites for whatever reason.

One issue in this story is the concentration of power in this sector, a typical problem that bedevils most of the copyright world, as I discuss in Walled Culture, the book. The main problem, though, is copyright itself. In a sane world, relevant cultural organisations would be able to download all of the streams on the V Live site as a matter of routine in order to preserve them for posterity, as important cultural artefacts of the K-pop world. Copyright naturally forbids that, seeing preservation as infringement. As a result, K-pop culture is likely to lose some of its characteristic moments, for no good reason, and to no one’s benefit.

Follow me @glynmoody on Twitter, or Mastodon. Reposted from the Walled Culture blog.

[Category: naver, weverse, copyright, culture, k-pop, v live]

[l] at 11/23/22 2:31pm
This week, we have a special joint episode with The Neoliberal Podcast, discussing the question on a lot of minds: just what the hell is going on at Twitter now that Elon Musk is in charge? He's owned the company for less than a month, and it's already in chaos. Mike sits down with Neoliberal Podcast host Jeremiah Johnson to discuss why content moderation is so difficult at scale, whether Mastodon can be a real Twitter replacement, Elon's erratic and dumb moves, and the big question: whether or not Twitter might die. Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts or Spotify, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

[Category: 1, twitter, elon musk, jeremiah johnson, podcast, social media]

[l] at 11/23/22 1:05pm
A new report claims that more than a third of Twitter's biggest advertisers have now pulled their ads from the platform, as the unstable and unpredictable nature of the new owner, combined with his implicit encouragement of hate, has made the site less and less welcoming to the brands with money to spend.

Dozens of top Twitter advertisers, including 14 of the top 50, have stopped advertising in the few weeks since Musk's chaotic acquisition of the social media company, according to The Post's analysis of data from Pathmatics, which offers brand analysis on digital marketing trends. Ads for blue-chip brands including Jeep and Mars candy, whose corporate parents were among the top 100 U.S. advertisers on the site in the six months before Musk's purchase, haven't appeared there since at least Nov. 7, the analysis found. Musk assumed ownership of the site Oct. 27. […] Pharmaceutical company Merck, cereal maker Kellogg, Verizon and Samuel Adams brewer Boston Beer also have stopped their advertising in recent weeks, the Pathmatics data shows. The companies didn't respond to requests for comment from The Post.

The article notes that the timing of all this isn't great, especially as the World Cup is happening, which historically has driven tons of traffic to Twitter. And if you want to know why advertisers are running away, perhaps it's because hate speech is now super viral on the site, as compared to under the previous regime. Another study, from Tufts, which has access to the Twitter firehose, found that hatred and harassment now show up a lot more often in tweets that are going viral.

For the months prior to Musk's takeover, the researchers deemed just one tweet out of the three top 20 lists to be actually hateful, in this case against Jewish people. The others were either quoting another person's hateful remarks or using the relevant key words in a non-hateful way. In the weeks after Musk took over Twitter, the same analysis found that hateful tweets became much more prominent among the most popular tweets with potentially toxic language. For tweets using words associated with anti-LGBTQ+ or antisemitic posts, seven of the top 20 posts in each category were now hateful. For popular tweets using potentially racist language, one of the top 20 was judged to be hate speech.

"The toxicity of Twitter has severely increased post-Musk's walking into that building," says Bhaskar Chakravorti, dean of global business at the Fletcher Business School at Tufts University and chair of Digital Planet, which carried out the analysis.

And that's not surprising, considering that Musk continually gives both tacit and explicit approval to those who are spewing hatred. He mocked the head of the ADL earlier this week, which resulted in tons of tweets I saw from people gleefully talking about how Musk clearly supported their antisemitic views. He regularly interacts with folks who have pushed similar hatred and harassment campaigns, pretty explicitly suggesting that they are trustworthy accounts rather than culture war grifters. The whole situation is bizarre, frankly. At the same moment that he's complaining about advertisers bailing, and refusing to take responsibility for their departure, he's egging on the hate and harassment. It's unclear if he just doesn't understand any of this, doesn't care, or simply assumes that the hate and harassment will lead to more usage, which will magically make the advertisers come back.
In the meantime, I've noticed that each time I check in at Twitter, the ad quality gets worse and worse. It used to be big brands with fairly ignorable ads. Now I keep seeing random religious ads, including one that was literally just a tweet praising God, and another for Scientology. I somehow doubt those are going to make up the difference in a way that allows Musk to pay his creditors.

[Category: twitter, advertisers, business models, elon musk, hate]

[l] at 11/23/22 11:47am
Soon after Musk took over Twitter, he announced that no big content moderation changes would occur until after he had convened a content moderation council made up of diverse perspectives. Never mind that Twitter had actually done that years earlier. Musk will reinvent anything and take credit for it if he can. Of course, it always seemed obvious that this whole plan was bullshit, and he put an exclamation point on that last week with his silly poll to determine whether or not he should reinstate Donald Trump's account. As we noted, the issue there was not so much the decision as the process, which showed how little Musk cared about getting it right and how much he was simply gunning for attention.

Now we're learning a bit more about Musk's canceled plans for the council. First, in meetings over the weekend he told Twitter employees that the council was always there just as cover for his own decisions, and that he'd be the final arbiter of what stays up and what comes down:

"We are going to do a content council, but it's an advisory council," Musk said in the call. "It's not a… They're not the ones who actually… At the end of the day it will be me deciding, and like any pretense to the contrary is simply not true. Because obviously I could choose who's on that content council and I don't need to listen to what they say."

You almost have to watch the video of the call that TMZ obtained, because he's practically holding back laughter while noting how silly it was that anyone bought into the idea.

Again, this is exactly what I said in my earlier post about why this is both deeply ironic and simultaneously ridiculous. For years, tons of people have believed, falsely, that it was the CEOs of these social media companies making the final call on what stays up and what comes down. Yes, in the most extreme cases some issues may eventually be raised with the CEO, but for the most part, these companies put in place policies and enforcement procedures, and do their best. Yes, this frequently leads to mistakes, but it wasn't Jack Dorsey saying "I don't like this person's views," no matter how much the online outrage factory insisted that was the case. Indeed, part of the reason those same folks got so excited about Musk taking over was that they believed (falsely) that he was going to get rid of all the moderation and so they'd be freed. Instead, what they have is exactly what they falsely feared was happening before: an impulsive, moody, vindictive billionaire enforcing his own personal views on moderation. It's deeply ironic, but his supporters will never recognize that Musk is doing exactly what they falsely believed Dorsey was doing before.

It's also deeply stupid, because no CEO should be engaged in such day-to-day decision making on content moderation questions. The flow of questions is absolutely overwhelming. Finally, his claim that because he'll pick who's on the board, there's no possible way the council could be anything other than an extension of his whims suggests (yet again) his ignorance of other approaches. I mean, it's almost hilarious to see him stumble through this when Mark Zuckerberg spent a year and over $100 million to try to figure out a fair process to set up the Oversight Board in a manner that wouldn't be seen as being a mere charade. They did that with a clear charter and a binding agreement on certain (not enough!) decisions, careful interviewing of tons of potential members of the board, and a start in which the initial four members of the board were chosen carefully to show a pretty balanced range of opinions, with those four members then choosing future members, taking the issue out of Zuck's hands. And even then tons of people insisted that the whole thing was a charade to rubberstamp whatever Zuck wanted (which has not proven to be the case!). It seems clear Musk doesn't know or doesn't care about any of that.

All of this confirms what I said earlier — and what Twitter's former head of trust & safety said in explaining why he quit — there is no principle behind Musk's plans. It is entirely based on the whims of one man.

Then, to make it even dumber (because, yes, it can always get dumber), Elon explained on Twitter that he only announced the whole content moderation council thing to get activists off his back. In a tweet, he said:

A large coalition of political/social activist groups agreed not to try to kill Twitter by starving us of advertising revenue if I agreed to this condition. They broke the deal.

Except that's complete bullshit, yet again. CNBC spoke to the various activist groups that Musk met with soon after taking over, and they say it's absolute bullshit.

Derrick Johnson, CEO of the National Association for the Advancement of Colored People, said in response to Musk's claims on Tuesday that the civil rights groups "would never make such a deal" and that "Democracy always comes first." […] In a statement to CNBC, the Gay & Lesbian Alliance Against Defamation and Free Press echoed Johnson's sentiment and said there was "no such deal" with Musk. "Musk is losing advertisers because he's acted irresponsibly, slashing content moderation teams that help keep brands safe and gutting the very sales teams responsible for maintaining relationships with advertisers," the Free Press said in a statement. "The main person responsible for the Twitter advertiser exodus is Elon Musk."

Musk simply cannot admit that his own fuckups scared away advertisers. I know of no activists who are trying to kill Twitter by starving the company of advertising revenue. Musk has all the power to get the advertisers back on board: stop fucking up. Stop making it unsafe for brands to have their ads on the site. He's refused to do that. He's repeatedly made moves that have encouraged the worst people to gleefully harass and abuse others. And he seems to take great joy in it. His decisions have caused the advertisers to pull back. The activists aren't trying to kill Twitter. They're trying to make it clear to Musk why he needs to actually do the right thing. But Musk can't accept the blame for his own fuckups, so he blames the activists (in earlier tweets, he threatened to sue them for tortious interference, which would be a hilarious lawsuit that he'd lose in embarrassingly bad fashion). He could easily bring the advertisers back if he just stopped making the site inhospitable for them. And maybe didn't fire all the sales folks who had the necessary relationships.

Of course, now his big idea to fix the trust and safety thing is… to let the AI handle it. That's from a huge, and very interesting, Washington Post article that gives detail after detail about just how clueless Musk and his entourage are about content moderation, and how nearly the entire trust & safety team is gone. It also explains how they even had plans to do things like reinstate previously banned accounts with content warning labels (as has been done with politicians in the past), and Musk had endorsed that plan before just ignoring it and simply reinstating the accounts he liked. It's all about the whims of a man who doesn't care to understand the nuances and tradeoffs of what he's doing and is incredibly prone to following random impulses. But the biggest takeaway from the article is that he still seems to think that trust & safety can be automated:

Now, Musk is looking to automate much of the Trust and Safety team's work to police content — eliminating some of the nuance from complicated decisions for a cheaper approach.

That will make anyone with any experience in any of this laugh. AI is a tool. And lots of trust & safety teams use it. But they use it in conjunction with a well-trained staff and clear policies (indeed, that's also some of how the AI learns). Without that, the AI makes tons of ridiculous mistakes. It's not a solution that enables you to get rid of entire teams. Meanwhile, stories like this are not going to bring advertisers back:

Virtually the entire team dedicated to rooting out covert foreign influence operations was fired or quit, putting in jeopardy the company's abilities to detect accounts including those attempting to influence U.S. politics.

It's not the activists, Elon. You're doing this to yourself.

[Category: twitter, content moderation, content moderation council, elon musk, trust & safety]

[l] at 11/23/22 11:42am
Mindfulness.com makes mindful living easy, practical, and simple to use in everyday life. You'll learn science-based skills that leading health experts from around the world are teaching as part of the modern-day mental health toolkit. Make good sleep a habit, be more in touch with yourself, and learn so much more to help you improve your overall perspective on life. It's on sale for $70. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

[Category: 1, daily deal]

[l] at 11/23/22 10:34am
To save the children, we must destroy everything. That's the reality of the EARN IT Act. I mean, you can get some sense of what you're in for just by reading the actual words behind the extremely labored acronym: Eliminating Abuse and Rampant Neglect of Interactive Technologies Act. Whew. It's a mouthful. And, given the name, it seems like this would be Congress putting funding towards supporting moderation efforts that target abusive content. But it's nothing like that. It's all about punishing tech companies for the acts of their users. Like FOSTA before it, the bill has zero interest in actually targeting the creators and distributors of illegal content, like child sexual abuse material (CSAM). Instead, it's only interested in allowing prosecutors to go after the easiest entities to locate: sites that rely on or facilitate the distribution of third-party content.

Specifically, the new bill makes a change to Section 230 that looks similar to the change that was made with FOSTA, saying that you don't get 230 protections if you advertise, promote, present, distribute, or solicit CSAM. But here's the thing: CSAM is already a federal crime and all federal crimes are already exempted from Section 230. On top of that, it's not as if there are a bunch of cases anyone can trot out as examples of Section 230 getting in the way of CSAM prosecutions. There's literally no evidence that this is needed or will help — because it won't.

As we've detailed before, the real scandal in all of this is not that internet companies are facilitating CSAM, but that the DOJ has literally ignored its Congressional mandate to go after those engaged in CSAM production and distribution. Congress tasked the DOJ with tackling CSAM and the DOJ has just not done it. The DOJ was required to compile data and set goals to eliminate CSAM… and has just not done it. That's why it's bizarre that EARN IT is getting all of the attention rather than an alternative bill from Senators Wyden, Gillibrand, Casey and Brown that would tell the DOJ to actually get serious about doing its job with regards to CSAM, rather than blaming everyone else.

The bill's proponents continue to defend it, casually ignoring that not only does it encourage social media sites to engage in no moderation at all (lest they trigger the knowledge clauses), but it's also intended to undermine encryption, not just by portraying it as something that mainly benefits sexual abusers of children but by introducing incentives that discourage the implementation of end-to-end encryption. In fact, any attempt to moderate and eliminate illegal content could subject companies to fines, because the safest route under the bill's mandates is to do nothing. How this will help limit the spread of CSAM and help track down the producers of this content is left to everyone's imagination. Those backing the bill simply assume that stripping immunity from hosts of third-party content will do the trick. They also imagine that making all internet users less safe is an acceptable trade-off for limiting the visibility of CSAM distribution, even though it will push CSAM producers to sites outside US jurisdiction (making them tougher to find) and make everyone else, who uses the internet and social media services for purely legal reasons, less secure.

Plenty has been said about this truly terrible piece of legislation here at Techdirt. There's plenty more being said elsewhere as well. The Internet Society has released its critique of the EARN IT Act. Guess what? It's extremely critical.
At stake is the privacy and security of millions of internet users. On the other side are opportunistic legislators who feel that doing something is the same thing as doing something useful. The legislators are wrong. EARN IT will fuck up the internet and its users by turning encryption into a liability.

The EARN IT Act threatens a company's ability to use and offer end-to-end encryption by putting their liability immunity at risk if they do not proactively monitor and filter for illegal user content. In doing so, it threatens the security, privacy, and safety of billions of people in the U.S. and worldwide who rely on encryption as a foundation for security online.

End-to-end encryption (E2EE) is the strongest digital security shield to keep communications and information confidential between the sender and intended receivers. When used correctly, no third party – including the service provider – has the keys to access or monitor content. If passed into law, the EARN IT Act will directly threaten online service providers and Internet intermediaries (entities that facilitate interactions on the Internet) that supply or support encrypted services. It will also create risks for Internet infrastructure intermediaries – such as Internet Service Providers and others – that have no direct involvement in providing encrypted services.

The bill holds providers liable for user content and communications. To avoid this liability, proactive measures would need to be taken. When it comes to encrypted communications, none of the options are good under EARN IT. The options would range from building on-demand encryption-breaking services to facilitate government investigations, to removing one end of the end-to-end encryption entirely to monitor content, to just saying the hell with it and refusing to offer encryption. None of these benefit the hundreds of millions of Americans who don't create or distribute illegal content.
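Since the entire fight is over that one technical property, it may help to see it in code. Below is a minimal sketch of end-to-end encryption using the open-source PyNaCl library; the names and the relay step are illustrative assumptions, not any particular messenger's implementation. The point it demonstrates is the one at issue: private keys live only on the endpoints, so the provider in the middle is relaying bytes it cannot read, let alone scan.

# e2ee_sketch.py -- illustrative only; real messengers add key agreement,
# ratcheting, and authentication on top of this basic primitive.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The service provider only ever handles ciphertext. Holding neither
# private key, it has no way to decrypt, monitor, or filter the content.
relayed_by_provider = ciphertext

# Bob decrypts with his private key and Alice's public key.
assert Box(bob_key, alice_key.public_key).decrypt(relayed_by_provider) == b"meet at noon"

That inability to inspect traffic is exactly the property EARN IT would treat as potential evidence of negligence.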
Undermining use of encryption makes people and businesses more vulnerable to criminal activity, and indeed preventing minors from encrypting their communications would make them more at risk of harm, not less. That's because preventing companies from using E2EE and offering secure services would undermine security and confidentiality online. This would put millions of law-abiding people in the U.S. – including marginalized groups and children – and billions more worldwide, at greater risk of harm from those seeking to exploit private data for harm.

The latent threat to users and platforms is that the government will decide, post-passage, what best practices companies will have to use to detect, report, and remove CSAM. The problem is the government's intercession, which makes Section 230 immunity reliant on compliance with a set of rules that will add feature creep to an already slippery slope. With entities like the FBI continually agitating for encryption backdoors, it will only be a matter of time before the best practices include content scanning of some sort, which means end-to-end encryption will no longer be an option. EARN IT doesn't explicitly make encryption illegal, but its mandates and wording may make the use of encryption close enough to a crime to hold companies liable for the actions of their users.

While offering end-to-end encryption in itself is not a crime, the EARN IT Act makes it possible for a court to use encryption as evidence to find a service provider liable in cases related to CSAM. If a user disseminates CSAM and violates Title 18 sections 2252, 2252a, or 2256(8) using an encrypted service, a court could determine the service provider's offering of encryption makes it liable for negligently or recklessly distributing CSAM because the encryption prevented the service provider from detecting and then blocking CSAM sent by its users – even if the service provider had no knowledge of particular CSAM being transmitted. A service provider offering E2EE is not aware of and does not have access to the content or communications shared or published online. As such, a court might consider this use of E2EE to determine whether the provider was in reckless disregard of CSAM distributed on its platform or was negligent in permitting its dissemination. Indeed, under the EARN IT Act, a state law could explicitly say that offering an encrypted service could be viewed as evidence of negligence or willful ignorance of CSAM transmission (without ever running afoul of the asserted "carveout" included in the EARN IT Act).

Encryption is more than a way to secure communications. It's also a way to provide security and privacy for users interacting with services that don't connect them to other human beings. The bill won't just bring the pain to WhatsApp and its competitors. It will make every intermediary, no matter how disconnected from the production or distribution of criminal content, potentially liable. And it will give prosecutors a long list of entities to punish, none of which actually produced or uploaded the content.

The EARN IT Act hinders the ability of intermediaries to use a critical community-adopted building block for Internet security: encryption. It does so by creating liability risk to the intermediary that cannot monitor content users share, store, or publish online. State laws could seek to impose civil liability on every party involved in the creation, carriage, or storage of communications, including ISPs, web hosting providers, cloud backup services, and encrypted communications services like WhatsApp. […] Furthermore, in the face of civil liability for damages under state laws permitted by the EARN IT Act, network operators could decide to stop carrying encrypted traffic or take other actions to block such traffic to avoid the risk of liability. Doing so would make them less interoperable with networks carrying E2EE traffic. Without interoperability, Internet users may experience slower and less secure web browsing.

This is certainly not the intent of the authors and supporters of the bill. Or, at least, it isn't an intent any of them would admit to. Chances are, most of the bill's backers haven't thought about it long enough to consider the undesirable side effects of hitching immunity to government mandates. Others may simply see this as a good way to discourage use of encryption, under the mistaken assumption that it will make it easier for investigators to track down child abusers. All of these assumptions are wrong. And there is certainly a small percentage of bill supporters who see these negative consequences and like them: people who not only don't understand the internet and social media platforms, but have converted their ignorance into fear. The problem is, there's only a few of them and millions of us. In theory, that means we have the upper hand. Unfortunately, when it comes to government work, it's top-down, which means the few decide what the rest of us have to live with.

[Category: 1, csam, earn it, privacy, security]

[l] at 11/23/22 7:18am
For the last few years, Apple has worked overtime trying to market itself as a more privacy-focused company. 40-foot billboards of the iPhone with the slogan "Privacy. That's iPhone" have been a key part of company marketing. The only problem: researchers keep highlighting how a lot of Apple's well-hyped privacy changes are performative in nature. The company's "do not track" button received endless hype for being a privacy game changer, yet far less discussed have been revelations that the button doesn't actually work, numerous apps have found ways to dodge the restrictions, and Apple does a generally shitty job holding those app makers to account.

The same thing has been found of Apple's iPhone Analytics setting, which makes an explicit promise to users that if they flip a button, they'll be able to "disable the sharing of Device Analytics altogether." But researchers have now shown that's really not true either, and the app store and other Apple apps collect oodles of personal data and information even when you ask them not to. And now there's another report from app security researchers Tommy Mysk and Talal Haj Bakry showing that Apple's iPhone Analytics setting also masks the use of a Directory Services Identifier, or DSID, to track and link user information and data, despite specific claims by Apple that that's not happening:

The privacy policy governing Apple's device analytics says "none of the collected information identifies you personally." But an analysis of the data sent to Apple shows it includes a permanent, unchangeable ID number called a Directory Services Identifier, or DSID, according to researchers from the software company Mysk. Apple collects that same ID number along with information for your Apple ID, which means the DSID is directly tied to your full name, phone number, birth date, email address and more, according to Mysk's tests.

Apple's response to all of these reports has been to just not comment, which is certainly much easier in a tech media environment that generally prioritizes gadgets, money, and influencer unboxing videos over consumer welfare and overall market health. Recall, Apple proclaims that personal data is "either not logged at all, is subject to privacy preserving techniques such as differential privacy, or is removed from any reports before they're sent to Apple." Yet here, once again, you've got researchers showing this simply isn't true and that user control is an illusion:

"Knowing the DSID is like knowing your name. It's one-to-one to your identity," said Tommy Mysk, an app developer and security researcher, who ran the test along with his partner Talal Haj Bakry. "All these detailed analytics are going to be linked directly to you. And that's a problem, because there's no way to switch it off."

These revelations see way less press coverage than Apple's purported dedication to privacy, which has seen just endless waves of hype and adoration across the tech press. That hype was generally helped by Mark Zuckerberg's hyperbolic claims that Apple's modest privacy changes were directly responsible for Facebook/Meta's cash problems (not, say, Mark Zuckerberg). The reality remains that, regardless of what they say, none of the big app makers, telecoms, hardware giants, or data brokers making billions upon billions of dollars on the backs of the feebly regulated data collection and monetization sector are going to implement meaningful changes that cost them billions in revenue, even if reform is essential to happy customers, working markets, and national security.
But the U.S. government is simply too corrupt to pass even a baseline privacy law for the internet era, one that erects meaningful penalties for sloppy privacy and security practices. So what we get instead is a kind of dumb marketing performance that a tech press, which makes most of its money from gadget hype clicks and ads, doesn't have much financial incentive to meaningfully criticize.

[Category: 1, apple, consumer rights, dsid, privacy, privacy law, privacy policies, telecom, wireless]

[l] at 11/22/22 8:55pm
You may think you can take a hands-off approach to local law enforcement. But you'd be wrong. Trusting the police to police themselves has never worked out. Even if you don't end up targeted by a DOJ investigation, all the work you didn't do to oversee your police officers can (and will) be used against you in a court of law.

Welcome to Euclid, Ohio, a city of 50,000 that is home to a problematic police department. A couple of years ago, the Sixth Circuit Appeals Court stripped immunity from plainclothes officers who beat a black man after deliberately placing themselves into enough danger to justify the excessive force deployment. It started with a pretextual stop in which the officers claimed Lamar Wright failed to deploy his turn signal. (The court noted no dash cam footage was available to verify this claim.) The officers' body cameras, however, captured what happened next:

[Officer] Flagg then tried to pull Wright from the vehicle, but the latter had difficulty getting out. As noted, Wright had recently undergone surgery for diverticulitis, which required staples in his stomach and a colostomy bag attached to his abdomen. Though the officers apparently could not see the bag and staples, these items prevented Wright from easily moving from his seat. Wright placed his right hand on the center console of the car to better situate his torso to exit the car. By this point Williams had moved over to stand behind Flagg on the driver's side. Williams responded to Wright's hand movement by reaching around Flagg to pepper-spray Wright at point-blank range. Flagg simultaneously deployed his taser into Wright's abdomen.

The besieged detainee finally managed to exit the car with his hands up. He then was forced face down on the ground, where he explained to officers that he had a "shit bag" on. Officer Williams next handcuffed Wright while he was on the ground. The appeals court said the officers' actions weren't justified by Wright's actions, nor by anything else they had observed before they performed the stop. Going further, it allowed claims against the city of Euclid to move forward. The city was also potentially at fault, the court said, citing Euclid PD use-of-force training materials obtained by Wright: materials that included jokes about excessive force and graphics that made light of police brutality. Here's what the Sixth Circuit said while allowing failure-to-train claims to move forward:

It is very troubling that the City of Euclid's law-enforcement training included jokes about Rodney King—who was tased and beaten in one of the most infamous police encounters in history—and a cartoon with a message that twists the mission of police. The offensive statements and depictions in the training contradict the ethical duty of law enforcement officer "to serve the community; to safeguard lives and property; to protect the innocent against deception, the weak against oppression or intimidation and the peaceful against violence or disorder; and to respect the constitutional rights of all to liberty, equality, and justice."

Given this background, it comes as no surprise another Euclid officer is on the hook for violating rights en route to wrongfully killing another black resident of Euclid:

A jury on Tuesday awarded $4.4 million to the family of a man shot and killed by a Euclid police officer in 2017.
Officer Matthew Rhodes acted recklessly when he climbed into 23-year-old Luke Stewart's car and shot him as Stewart drove away from a stop, an eight-member jury unanimously held after a trial sparked by a wrongful death lawsuit that Stewart's mother filed.

And here's how Officer Rhodes got there: by ignoring pretty much everything about good police work in hopes of lucking into something more than a "guy sleeping in a car" community caretaker interaction.

Rhodes shot and killed Stewart about 7 a.m. March 13, 2017, after Rhodes and fellow Euclid officer Louis Catalani got called to the scene by a resident who reported that a car she didn't recognize was parked on the street in front of her house. Stewart was asleep in the driver's seat, and the officers said they saw items in the car that led them to suspect he may have been impaired. Rhodes and Catalani did not turn on their police cars' red and blue lights or dashboard cameras during the encounter. The department did not provide officers with body cameras at the time, so no video exists of the interaction between the officers and Stewart. Neither Rhodes nor Catalani identified himself as a police officer. They shined bright lights mounted on their police cars on Stewart's car as they walked up to it.

When faced with the facts, Officer Rhodes chose to lie. He claimed that he shot Stewart because he was afraid Stewart was trying to drive them both into a telephone pole, which would have sent both of them flying through the windshield. On cross-examination, he admitted the car was in neutral when he shot Stewart, a confession prompted by the family's attorneys, who pointed out the vehicle had traveled less than a quarter-mile in the 57 seconds it took for the officer to decide to end Stewart's life, a distance that represented an average speed of 14 miles per hour.

The jury here found the officer at fault. The grand jury presented with this case (the kind of jury that will indict pretty much anyone for any reason) somehow didn't find anything criminal about the officer's actions. It's somewhat of a miracle this case made its way to a jury trial. This was a federal case originally. The district court awarded qualified immunity to Officer Rhodes, even though the facts were still in dispute and the court was aware Rhodes had been, at best, inconsistent in his testimony. Here's one footnote to that effect:

Officer Rhodes testified that he lost his Taser at this point, id., but that contradicts his statement to the [Ohio] BCI [Bureau of Criminal Investigations]

It also pointed out that Rhodes did not activate his dash cam and belt mic, in violation of PD policy, despite the fact that the dash camera could be activated from the belt microphone. And more lying:

Officer Catalani also testified that there appeared to be drug residue on the scale. But he made no mention of any residue in his interview with agents from the Ohio Bureau of Criminal Investigation (BCI).

And more:

Defendant Officer Rhodes asserted, however, that the tinting on the Honda would have prevented the lights from blinding Stewart. But the Honda's windows do not appear unusually dark in the BCI report photos.

There's more, but you get the point. Nevertheless, the district court said the lying officer had no reason to believe his actions were not reasonable under the circumstances (that he lied repeatedly about). The same conclusion was reached by the Sixth Circuit Court of Appeals, the same court that found another Euclid PD officer so out of line it could not extend qualified immunity.
In this case, the court had its doubts about Rhodes' actions and claims, but could not say it was clearly established (burn in hell, qualified immunity) that his deadly force was unreasonable under established case law. (Vomit emoji.) However, it did do something useful: it said the state law claims under the Ohio Constitution were still valid. It sent the case back down to the lower court. And that's why Officer Rhodes is now on the hook for $4.4 million in wrongful death damages.

Of course, this award is more likely to come down than go up after the inevitable appeal, but it should send a clear message to both the city and the officers it employs that rights violations are not just police use-of-force punchlines. And it should also make it clear that not every jury is willing to excuse any actions taken by someone wearing a badge and a uniform. Euclid needs to do some deep-cleaning. Its police department is more trouble than it's worth.

[Category: 1, 6th circuit, euclid, louis catalani, luke stewart, matthew rhodes, ohio, pretextual stop, qualified immunity]

[l] at 11/22/22 4:38pm
After posting the following AI-generated images, I got private replies asking the same question: "Can you tell me how you made these?" So, here I will provide the background and "how to" of creating such AI portraits, but also describe the ethical considerations and the dangers we should address right now.

[Image: Astria AI images of Nirit Weiss-Blatt]

Background

Generative AI – as opposed to analytical artificial intelligence – can create novel content. It does not merely analyze existing datasets; it generates whole new images, text, audio, videos, and code.

[Image: Sequoia's Generative-AI Market Map/Application Landscape, from Sonya Huang's tweet]

As the ability to generate original images based on written text emerged, it became the hottest hype in tech. It all began with the release of DALL-E 2, an improved AI art program from OpenAI. It allowed users to input text descriptions and get images that looked amazing, adorable, or weird as hell.

[Image: DALL-E 2 image results]

Then people started hearing about Midjourney (and its vibrant Discord) and Stable Diffusion, an open-source project. (Google's Imagen and Meta's image generator have not been released to the public.) Stable Diffusion allowed engineers to train the model on any image dataset to churn out any style of art. Due to the rapid development of the coding community, more specialized generators were introduced, including new killer apps that create AI-generated art from YOUR pictures: Avatar AI, ProfilePicture.AI, and Astria AI. With them, you can create your own AI-generated avatars or profile pictures. You can change some of your features, as demonstrated by Andrew "Boz" Bosworth, Meta's CTO, who used AvatarAI to see himself with hair:

[Image: Screenshot from Andrew "Boz" Bosworth's Twitter account]

Startups like the ones listed above are booming:

[Image: The founders of AvatarAI and ProfilePicture.AI tweet about their sales and growth]

In order to use their tools, you need to follow these steps:

1. How to prepare your photos for the AI training

As of now, training Astria AI with your photos costs $10. Every app charges differently for fine-tuning credits (e.g., ProfilePicture AI costs $24, and Avatar AI costs $40). Please note that those charges change quickly as the companies experiment with their business models. Here are a few ways to improve the training process (a scripted version of the cropping step appears below):

- At least 20 pictures, preferably shot or cropped to a 1:1 (square) aspect ratio.
- At least 10 face close-ups, 5 medium shots from the chest up, and 3 full-body shots.
- Variation in background, lighting, expressions, and eyes looking in different directions.
- No glasses/sunglasses.
- No other people in the pictures.

[Image: Examples from my set of pictures]

Approximately 60 minutes after uploading your pictures, a trained AI model will be ready.
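If you have a folder of phone pictures, the squaring step can be scripted instead of done by hand. Here is a minimal sketch in Python using the Pillow imaging library; the folder names, the 512-pixel output size, and the JPEG-only file pattern are my own illustrative assumptions, not requirements of any of the apps mentioned above.

# prep_photos.py -- center-crop a folder of photos to the 1:1 squares
# the fine-tuning apps ask for. Folder names and size are assumptions.
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("raw_photos"), Path("training_set"), 512
DST.mkdir(exist_ok=True)

for path in sorted(SRC.glob("*.jpg")):
    img = Image.open(path).convert("RGB")
    side = min(img.size)  # largest centered square that fits
    left, top = (img.width - side) // 2, (img.height - side) // 2
    square = img.crop((left, top, left + side, top + side))
    square.resize((SIZE, SIZE), Image.LANCZOS).save(DST / path.name, quality=95)

A center crop is a blunt instrument, so it is worth eyeballing the output folder to confirm that faces actually survived the crop before uploading.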
Where will you probably need the most guidance? Prompting.

2. How to survive the prompting mess

After the training is complete, a few images will be waiting for you on your page. Those are "default prompts," included as examples of the app's capabilities. To create your own prompts, set the className as "person" (this was recommended by Astria AI). Formulating the right prompts for your purpose can take a lot of time. You'll need patience (and motivation) to keep refining the prompts. But when a text prompt comes to life as you envisioned (or better than you envisioned), it feels a bit like magic. To get creative inspiration, I used two search engines, Lexica and Krea. You can search for keywords, scroll until you find an image style you like, and copy the prompt (then change the text to "sks person" to make it your self-portrait).

[Image: Screenshot from Lexica]

Some prompts are so long that reading them is painful. They usually include the image's setting (e.g., "highly detailed realistic portrait") and style ("art by" one of the popular artists). As regular people need help crafting those words, we already have an entirely new role for artists: prompt engineering. It's going to be a desirable skill. Just bear in mind that no matter how professional your prompts are, some results will look WILD. In one image, I had 3 arms (don't ask me why). If you wish to avoid the whole prompt chaos, I have a friend who just used the default ones, was delighted with the results, and shared them everywhere. For those apps to become more popular, I recommend including more default prompts.
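For readers who want to see what the hosted apps are wrapping, here is a rough sketch of the same prompt flow using the open-source Stable Diffusion stack via Hugging Face's diffusers library. It assumes a DreamBooth-style checkpoint already fine-tuned on your photos and saved locally; the model path and the prompt text are illustrative, not anything Astria or its competitors document.

# generate_portrait.py -- run a prompt against a fine-tuned checkpoint.
# The local model path and the prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./my-dreambooth-model", torch_dtype=torch.float16
).to("cuda")

# "sks person" is the rare token bound to your likeness during fine-tuning.
prompt = "highly detailed realistic portrait of sks person, dramatic lighting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("portrait.png")

Passing a seeded torch.Generator to the pipeline helps while iterating on wording, since it holds the randomness constant between prompt tweaks.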
Potentials and Advantages

1. It's NOT the END of human creativity

The electronic synthesizer did not kill music, and photography did not kill painting. Instead, they catalyzed new forms of art. AI art is here to stay and can make creators more productive. Creators are going to include such models as part of their creative process. It's a partnership: AI can serve as a starting point, a sketch tool that provides suggestions, and the creator will improve on it further.

2. The path to the masses

Thus far, crypto boosters haven't answered the simple question of "what is it good for?" and have failed to articulate concrete, compelling use cases for Web3. All we got was needless complexity, vague future-casting, and "cryptocountries." By contrast, AI-generated art has a clear utility for creative industries. It's already used in advertising, marketing, gaming, architecture, fashion, graphic design, and product design. This Twitter thread provides a variety of use cases, from commerce to the medical imaging domain. When it comes to AI portraits, I'm thinking of another target audience: teenagers. Why? Because they already spend hours perfecting their pictures with various filters. Make image-generating tools inexpensive and easy to use, and they'll be your heaviest users. Hopefully, they won't use them in their dating profiles.

Downsides and Disadvantages

1. Copying by AI was not consented to by the artists

Despite the booming industry, there's a lack of compensation for artists. Read about their frustration, for example, in how one unwilling illustrator found herself turned into an AI model. Spoiler alert: she didn't like being turned into a popular prompt for people to mimic, and now thousands of people (soon to be millions) can copy her style of work almost exactly. Copying artists is a copyright nightmare. The input question is: can you use copyright-protected data to train AI models? The output question is: can you copyright what an AI model creates? Nobody knows the answers, and it's only the beginning of this debate.

2. This technology can be easily weaponized

A year ago on Techdirt, I summed up the narratives around Facebook: (1) amplifying the good/bad, or a mirror for the ugly; (2) the algorithms' fault vs. the people who build them or use them; (3) fixing the machine vs. the underlying societal problems. I believe this discussion also applies to AI-generated art. It should be viewed through the same lens: good, bad, and ugly.

Though this technology is delightful and beneficial, there are also negative ramifications to releasing image-manipulation tools and letting humanity play with them. While DALL-E had a few restrictions, the new competitors took a "hands-off" approach, with no safeguards to prevent people from creating sexual or potentially violent and abusive content. Soon after, a subset of users generated deepfake-style images of nude celebrities. (Look surprised.) Google's Dreambooth (which the AI-generated avatar tools use) made making deepfakes even easier.

As part of my exploration of the new tools, I also tried DeviantArt's DreamUp. Its "most recent creations" page displayed various images depicting naked teenage girls. It was disturbing and sickening. In one digital artwork of a teen girl in the snow, the artist commented: "This one is closer to what I was envisioning, apart from being naked. Why DreamUp? Clearly, I need to state 'clothes' in my prompt." That says it all.

According to the new book Data Science in Context: Foundations, Challenges, Opportunities, machine learning advances have made deepfakes more realistic but have also enhanced our ability to detect them, leading to a "cat-and-mouse game." In almost every form of technology, there are bad actors playing this cat-and-mouse game. Managing user-generated content online is a headache that social media companies know all too well. Elon Musk's first two weeks at Twitter magnified that experience — "he courted chaos and found it." Stability AI released an open-source tool with a belief in radical freedom, courted chaos, and found it in AI-generated porn and CSAM.

Text-to-video isn't very realistic now, but at the pace at which AI models are developing, it will be in a few months. In a world of synthetic media, seeing will no longer be believing, and the basic unit of visual truth will no longer be credible. The authenticity of every video will be in question. Overall, it will become increasingly difficult to determine whether a piece of text, audio, or video is human-generated or not. That could have a profound impact on trust in online media. The danger is that with these new persuasive visuals, propaganda could be taken to a whole new level. Meanwhile, deepfake detectors are making progress. The arms race is on.

AI-generated art inspires creativity, and enthusiasm as a result. But as it approaches mass consumption, we can also see the dark side. A revolution of this magnitude can have many consequences, some of which can be downright terrifying. Guardrails are needed now.

[Category: 1, ai art, generative ai, portraits]

[l] at 11/22/22 2:37pm
Back in 2014, New York City officials decided they would replace the city's dated pay phones with "information kiosks" providing free public Wi-Fi, phone calls, device charging, and a tablet for access to city services, maps, and directions. The kiosks were to be funded by "context-aware" ads based on a variety of data collected from kiosk users and NYC residents just passing by. It didn't go well.

Within a few years, reports began to emerge that the company hired to deploy the kiosks (CityBridge) had only deployed 1,900 of an originally promised 7,000 kiosks. And the kiosks it had deployed were being used to watch porn. The program has also long been criticized for over-collecting user data and being completely non-transparent about what data was being collected. By 2020, CityBridge still owed the city $75 million. Last year, an audit by New York State's Comptroller found LinkNYC failed completely to meet its deployment goals, failed to adequately maintain existing kiosks, failed to turn on many already deployed kiosks, and had fallen well short of projected ad revenues.

What did the city do? It doubled down. New York Mayor Eric Adams not only killed off a more promising plan to build a city open access fiber network to boost competition, he decided to expand the LinkNYC project. That's involved deploying entirely new, ugly, and even larger kiosks embedded with 5G small cells. City residents, so far, aren't particularly enthused about the eyesores:

Some residents are calling them eyesores. Others are worried about safety due to their placement. "No one asked us about the design or where they should go. We have notes," said a Brooklyn neighborhood group on Facebook.

While the kiosks still provide free and useful services to those unafraid of sticky surfaces, they're a particular boon to wireless carriers looking to expand their 5G network reach using small cells, something they would have likely done anyway (usually using existing buildings and city light fixtures). Users can still access free Wi-Fi at the kiosks, but you'll obviously need a paid 5G subscription to actually access the 5G component of the towers.

The problem, again, is that the kiosks don't actually address the issue at the heart of the digital divide: duopoly/monopoly telecom power that has constrained city competition, resulting in high prices for home access. Two in five New York City households lack either a home broadband connection or cell service. More than 1.5 million New Yorkers lack both. Usually, high service costs are the biggest obstacle. The kiosks are a nice perk, but they're not actually addressing the regional monopoly problem. And guys like Adams don't want to upset monopoly power, because it means upsetting companies that aren't just politically powerful, but are bone-grafted to our intelligence gathering and first responder networks, effectively making them a part of government and beyond meaningful accountability.

It's the same story that plays out nationwide. There's just an unlimited number of half-measures professing to bridge the digital divide that are, in reality, just band-aids. Band-aids that usually involve throwing additional subsidies at the very same monopolies responsible for driving meaningful competition out of your town, city, or state over the last thirty-five years. Truly fixing the U.S. digital divide means policies that drive open access fiber networks and new, local competitors into the backyards of entrenched monopolies (see our recent report on this very subject).
New York City had a real opportunity to do this with the open access network component included in its original NYC Internet Master Plan, but instead took the familiar path of half-cooked efforts that give politicians something to crow about but don't actually solve (or usually even acknowledge!) the underlying problem of monopoly control.

[Category: 1, 5g, broadband, digital divide, high speed internet, linknyc, monopolies, new york city, regulatory capture, wireless]
