Social Networks

Threads Launches In the European Union (macrumors.com) 27

Meta CEO Mark Zuckerberg announced that Threads is now available to users in the European Union. "Today we're opening Threads to more countries in Europe," wrote Zuckerberg in a post on the platform. "Welcome everyone." MacRumors reports: The move comes five months after the social media network launched in most markets around the world, but remained unavailable to EU-based users due to regulatory hurdles. [...] In addition to creating a Threads profile for posting, users in the EU can also simply browse Threads without having an Instagram account, an option likely introduced to comply with legislation surrounding online services.

The expansion into a market of 448 million people should see Threads' user numbers get a decent boost. Meta CEO Mark Zuckerberg said on a company earnings call in October that Threads now has "just under" 100 million monthly users. Since its launch earlier this year it has gained a web app, an ability to search for posts, and a post editing feature.

Businesses

FTC is Investigating Adobe Over Its Rules for Canceling Software Subscriptions (fortune.com) 18

Adobe said US regulators are probing the company's cancellation rules for software subscriptions, an issue that has long been a source of ire for customers. From a report: The company has been cooperating with the Federal Trade Commission on a civil investigation of the issue since June 2022, Adobe said Wednesday in a filing. A settlement could involve "significant monetary costs or penalties," the company said.

Users of Adobe programs including Photoshop and Premiere have long complained about the expense of canceling a subscription, which can cost more than $700 annually for individuals. Subscribers must cancel within two weeks of buying a subscription to receive a full refund; otherwise, they incur a prorated penalty. Some other digital services such as Spotify and Netflix don't charge a cancellation fee. Digital subscriptions have been a recent focus for the FTC. It proposed a rule in March that consumers must be able to cancel subscriptions as easily as they sign up for them.

"Too often, companies make it difficult to unsubscribe from a service, wasting Americans' time and money on things they may not want or need," President Joe Biden said in a social media post at the time. Adobe said the FTC alerted the company in November that commission staff say "they had the authority to enter into consent negotiations to determine if a settlement regarding their investigation of these issues could be reached. We believe our practices comply with the law and are currently engaging in discussion with FTC staff."

Youtube

More Than 15% of Teens Say They're On YouTube or TikTok 'Almost Constantly' (cnbc.com) 70

Nearly 1 in 5 teenagers in the U.S. say they use YouTube and TikTok "almost constantly," according to a Pew Research Center survey. CNBC reports: The survey showed that YouTube was the most "widely used platform" for U.S.-based teenagers, with 93% of survey respondents saying they regularly use Google's video-streaming service. Of that 93% figure, about 16% of the teenage respondents said they "almost constantly visit or use" YouTube, underscoring the video app's immense popularity with the youth market. TikTok was the second-most popular app, with 63% of teens saying they use the ByteDance-owned short-video service, followed by Snapchat and Meta's Instagram, which had 60% and 59%, respectively. About 17% of the 63% of respondents who said they use TikTok indicated they access the short-video service "almost constantly," the report noted.

Meanwhile, Facebook and Twitter, now known as X, are not as popular with U.S.-based teenagers as they were a decade ago, the Pew Research study detailed. Regarding Facebook in particular, the Pew Research authors wrote that the share of teens who use the Meta-owned social media app "has dropped from 71% in 2014-2015 to 33% today." During the same period, Meta-owned Instagram's usage has not made up the difference in share, increasing from 52% in 2014-15 to a peak of 62% last year, then dropping to 59% in 2023, according to the firm.

AI

MIT Group Releases White Papers On Governance of AI (mit.edu) 46

An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]

"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.
These are the key policies and approaches mentioned in the white papers:

Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.

Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.

Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.

Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).

Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.

Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.

Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.

Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.
Security

US Healthcare Giant Norton Says Hackers Stole Millions of Patients' Data During Ransomware Attack (techcrunch.com) 27

An anonymous reader quotes a report from TechCrunch: Kentucky-based nonprofit healthcare system Norton Healthcare has confirmed that hackers accessed the personal data of millions of patients and employees during an earlier ransomware attack. Norton operates more than 40 clinics and hospitals in and around Louisville, Kentucky, and is the city's third-largest private employer. The organization has more than 20,000 employees, and more than 3,000 total providers on its medical staff, according to its website. In a filing with Maine's attorney general on Friday, Norton said that the sensitive data of approximately 2.5 million patients, as well as employees and their dependents, was accessed during its May ransomware attack.

In a letter sent to those affected, the nonprofit said that hackers had access to "certain network storage devices between May 7 and May 9," but did not access Norton Healthcare's medical record system or Norton MyChart, its electronic medical record system. But Norton admitted that following a "time-consuming" internal investigation, which the organization completed in November, Norton found that hackers accessed a "wide range of sensitive information," including names, dates of birth, Social Security numbers, health and insurance information and medical identification numbers. Norton Healthcare says that, for some individuals, the exposed data may have also included financial account numbers, driver licenses or other government ID numbers, as well as digital signatures. It's not known if any of the accessed data was encrypted.

Norton says it notified law enforcement about the attack and confirmed it did not pay any ransom payment. The organization did not name the hackers responsible for the cyberattack, but the incident was claimed by the notorious ALPHV/BlackCat ransomware gang in May, according to data breach news site DataBreaches.net, which reported that the group claimed it exfiltrated almost five terabytes of data. TechCrunch could not confirm this, as the ALPHV website was inaccessible at the time of writing.

Privacy

Republican Presidential Candidates Debate Anonymity on Social Media (cnbc.com) 174

Four Republican candidates for U.S. president debated Wednesday — and moderator Megyn Kelly had a tough question for former South Carolina governor Nikki Haley. "Can you please speak to the requirement that you said that every anonymous internet user needs to out themselves?"

Nikki Haley: What I said was that social media companies need to show us their algorithms. I also said there are millions of bots on social media right now. They're foreign, they're Chinese, they're Iranian. I will always fight for freedom of speech for Americans; we do not need freedom of speech for Russians and Iranians and Hamas. We need social media companies to go and fight back on all of these bots that are happening. That's what I said.

As a mom, do I think social media would be more civil if we went and had people's names next to that? Yes, I do think that, because I think we've got too much cyberbullying, I think we've got child pornography and all of those things. But having said that, I never said government should go and require anyone's name.

DeSantis: That's false.

Haley: What I said —

DeSantis: You said I want your name. As president of the United States, her first day in office, she said one of the first things I'm going to do --

Haley: I said we were going to get the millions of bots.

DeSantis: "All social medias? I want your name." A government i.d. to dox every American. That's what she said. You can roll the tape. She said I want your name — and that was going to be one of the first things she did in office. And then she got real serious blowback — and understandably so, because it would be a massive expansion of government. We have anonymous speech. The Federalist Papers were written with anonymous writers — Jay, Madison, and Hamilton, they went under "Publius". It's something that's important — and especially given how conservatives have been attacked and they've lost jobs and they've been cancelled. You know the regime would use that to weaponize that against our own people. It was a bad idea, and she should own up to it.

Haley: This cracks me up, because Ron is so hypocritical, because he actually went and tried to push a law that would stop anonymous people from talking to the press, and went so far to say bloggers should have to register with the state --

DeSantis: That's not true.

Haley: — if they're going to write about elected officials. It was in the — check your newspaper. It was absolutely there.

DeSantis quickly attributed the introduction of that legislation to "some legislator".

The press had already extensively written about Haley's position on anonymity on social media. Three weeks ago Business Insider covered a Fox News interview, and quoted Nikki Haley as saying: "When I get into office, the first thing we have to do, social media companies, they have to show America their algorithms. Let us see why they're pushing what they're pushing. The second thing is every person on social media should be verified by their name." Haley said this was why her proposals would be necessary to counter the "national security threat" posed by anonymous social media accounts and social media bots. "When you do that, all of a sudden people have to stand by what they say, and it gets rid of the Russian bots, the Iranian bots, and the Chinese bots," Haley said. "And then you're gonna get some civility when people know their name is next to what they say, and they know their pastor and their family member's gonna see it. It's gonna help our kids and it's gonna help our country," she continued... A representative for the Haley campaign told Business Insider that Haley's proposals were "common sense."

"We all know that America's enemies use anonymous bots to spread anti-American lies and sow chaos and division within our borders. Nikki believes social media companies need to do a better job of verifying users so we can crack down on Chinese, Iranian, and Russian bots," the representative said.

The next day CNBC reported that Haley "appeared to add a caveat... suggesting Wednesday that Americans should still be allowed to post anonymously online." A spokesperson for Haley's campaign added, "Social media companies need to do a better job of verifying users as human in order to crack down on anonymous foreign bots. We can do this while protecting America's right to free speech and Americans who post anonymously."

Privacy issues had also come up just five minutes earlier in the debate. In March America's Treasury Secretary had recommended the country "advance policy and technical work on a potential central bank digital currency, or CBDC, so the U.S. is prepared if CBDC is determined to be in the national interest."

But Florida governor Ron DeSantis spoke out forcefully against the possibility. "They want to get rid of cash, crypto, they want to force you to do that. They'll take away your privacy. They will absolutely regulate your purchases. On Day One as president, we take the idea of Central Bank Digital Currency, and we throw it in the trash can. It'll be dead on arrival." [The audience applauded.]
Education

Harvard Accused of Bowing to Meta By Ousted Disinformation Scholar in Whistleblower Complaint (cjr.org) 148

The Washington Post reports: A prominent disinformation scholar has accused Harvard University of dismissing her to curry favor with Facebook and its current and former executives in violation of her right to free speech.

Joan Donovan claimed in a filing with the Education Department and the Massachusetts attorney general that her superiors soured on her as Harvard was getting a record $500 million pledge from Meta founder Mark Zuckerberg's charitable arm. As research director of Harvard Kennedy School projects delving into mis- and disinformation on social media platforms, Donovan had raised millions in grants, testified before Congress and been a frequent commentator on television, often faulting internet companies for profiting from the spread of divisive falsehoods. Last year, the school's dean told her that he was winding down her main project and that she should stop fundraising for it. This year, the school eliminated her position.

As one of the first researchers with access to "the Facebook papers" leaked by Frances Haugen, Donovan was asked to speak at a meeting of the Dean's Council, a group of the university's high-profile donors, remembers The Columbia Journalism Review: Elliot Schrage, then the vice president of communications and global policy for Meta, was also at the meeting. Donovan says that, after she brought up the Haugen leaks, Schrage became agitated and visibly angry, "rocking in his chair and waving his arms and trying to interrupt." During a Q&A session after her talk, Donovan says, Schrage reiterated a number of common Meta talking points, including the fact that disinformation is a fluid concept with no agreed-upon definition and that the company didn't want to be an "arbiter of truth."

According to Donovan, Nancy Gibbs, Donovan's faculty advisor, was supportive after the incident. She says that they discussed how Schrage would likely try to pressure Douglas Elmendorf, the dean of the Kennedy School of Government (where the Shorenstein Center hosting Donovan's project is based) about the idea of creating a public archive of the documents... After Elmendorf called her in for a status meeting, Donovan claims that he told her she was not to raise any more money for her project; that she was forbidden to spend the money that she had raised (a total of twelve million dollars, she says); and that she couldn't hire any new staff. According to Donovan, Elmendorf told her that he wasn't going to allow any expenditure that increased her public profile, and used a number of Meta talking points in his assessment of her work...

Donovan says she tried to move her work to the Berkman Klein Center at Harvard, but that the head of that center told her that they didn't have the "political capital" to bring on someone whom Elmendorf had "targeted"... Donovan told me that she believes the pressure to shut down her project is part of a broader pattern of influence in which Meta and other tech platforms have tried to make research into disinformation as difficult as possible... Donovan said she hopes that by blowing the whistle on Harvard, her case will be the "tip of the spear."

Another interesting detail from the article: [Donovan] alleges that Meta pressured Elmendorf to act, noting that he is friends with Sheryl Sandberg, the company's chief operating officer. (Elmendorf was Sandberg's advisor when she studied at Harvard in the early nineties; he attended Sandberg's wedding in 2022, four days before moving to shut down Donovan's project.)
Social Networks

Reactions Continue to Viral Video that Led to Calls for College Presidents to Resign 414

After billionaire Bill Ackman demanded three college presidents "resign in disgrace," that post on X — excerpting their testimony before a U.S. Congressional committee — has now been viewed more than 104 million times, provoking a variety of reactions.

Saturday afternoon, one of the three college presidents resigned — University of Pennsylvania president Liz Magill.

Politico reports that the Republican-led Committee now "will be investigating Harvard University, MIT and the University of Pennsylvania after their institutions' leaders failed to sufficiently condemn student protests calling for 'Jewish genocide.'" The BBC reports a wealthy UPenn donor reportedly withdrew a stock grant worth $100 million.

But after watching the entire Congressional hearing, New York Times opinion columnist Michelle Goldberg wrote that she'd seen a "more understandable" context: In the questioning before the now-infamous exchange, you can see the trap [Congresswoman Elise] Stefanik laid. "You understand that the use of the term 'intifada' in the context of the Israeli-Arab conflict is indeed a call for violent armed resistance against the state of Israel, including violence against civilians and the genocide of Jews. Are you aware of that?" she asked Claudine Gay of Harvard. Gay responded that such language was "abhorrent."

Stefanik then badgered her to admit that students chanting about intifada were calling for genocide, and asked angrily whether that was against Harvard's code of conduct. "Will admissions offers be rescinded or any disciplinary action be taken against students or applicants who say, 'From the river to the sea' or 'intifada,' advocating for the murder of Jews?" Gay repeated that such "hateful, reckless, offensive speech is personally abhorrent to me," but said action would be taken only "when speech crosses into conduct." So later in the hearing, when Stefanik again started questioning Gay, Kornbluth and Magill about whether it was permissible for students to call for the genocide of the Jews, she was referring, it seemed clear, to common pro-Palestinian rhetoric and trying to get the university presidents to commit to disciplining those who use it. Doing so would be an egregious violation of free speech. After all, even if you're disgusted by slogans like "From the river to the sea, Palestine will be free," their meaning is contested...

Liberal blogger Josh Marshall argues that "While groups like Hamas certainly use the word [intifada] with a strong eliminationist meaning it is simply not the case that the term consistently or usually or mostly refers to genocide. It's just not. Stefanik's basic equation was and is simply false and the university presidents were maladroit enough to fall into her trap."

The Wall Street Journal published an investigation the day after the hearing. A political science professor at the University of California, Berkeley hired a survey firm to poll 250 students across the U.S. from "a variety of backgrounds" — and the results were surprising: A Latino engineering student from a southern university reported "definitely" supporting "from the river to the sea" because "Palestinians and Israelis should live in two separate countries, side by side." Shown on a map of the region that a Palestinian state would stretch from the Jordan River to the Mediterranean Sea, leaving no room for Israel, he downgraded his enthusiasm for the mantra to "probably not." Of the 80 students who saw the map, 75% similarly changed their view... In all, after learning a handful of basic facts about the Middle East, 67.8% of students went from supporting "from the river to the sea" to rejecting the mantra. These students had never seen a map of the Mideast and knew little about the region's geography, history, or demography.
More about the phrase from the Associated Press: Many Palestinian activists say it's a call for peace and equality after 75 years of Israeli statehood and decades-long, open-ended Israeli military rule over millions of Palestinians. Jews hear a clear demand for Israel's destruction... By 2012, it was clear that Hamas had claimed the slogan in its drive to claim land spanning Israel, the Gaza Strip and the West Bank... The phrase also has roots in the Hamas charter... [Since 1997 the U.S. government has considered Hamas a terrorist organization.]

"A Palestine between the river to the sea leaves not a single inch for Israel," read an open letter signed by 30 Jewish news outlets around the world and released on Wednesday... Last month, Vienna police banned a pro-Palestinian demonstration, citing the fact that the phrase "from the river to the sea" was mentioned in invitations and characterizing it as a call to violence. And in Britain, the Labour party issued a temporary punishment to a member of Parliament, Andy McDonald, for using the phrase during a rally at which he called for a stop to bombardment.

As the controversy rages on, Ackman's X timeline now includes an official response reposted from a college that wasn't called to testify — Stanford University: In the context of the national discourse, Stanford unequivocally condemns calls for the genocide of Jews or any peoples. That statement would clearly violate Stanford's Fundamental Standard, the code of conduct for all students at the university.
Ackman also retweeted this response from OpenAI CEO Sam Altman: for a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i'd like to just state that i was totally wrong. i still don't understand it, really. or know what to do about it. but it is so fucked.
Wednesday UPenn's president announced they'd immediately consider a change in policy, in an X post viewed 38.7 million times: For decades under multiple Penn presidents and consistent with most universities, Penn's policies have been guided by the [U.S.] Constitution and the law. In today's world, where we are seeing signs of hate proliferating across our campus and our world in a way not seen in years, these policies need to be clarified and evaluated. Penn must initiate a serious and careful look at our policies, and Provost Jackson and I will immediately convene a process to do so. As president, I'm committed to a safe, secure, and supportive environment so all members of our community can thrive. We can and we will get this right. Thank you.
The next day the university's business school called on Magill to resign. And Saturday afternoon, Magill resigned.
Businesses

Before Sam Altman's Ouster, OpenAI's Leaders Were Warned of Abusive Behavior (msn.com) 64

"This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman," the Washington Post reported late Friday: Altman — a revered mentor, and avatar of the AI revolution — had been psychologically abusive, the employees alleged, creating pockets of chaos and delays at the artificial-intelligence start-up, according to two people familiar with the board's thinking who spoke on the condition of anonymity to discuss sensitive internal matters. The company leaders, a group that included key figures and people who manage large teams, mentioned Altman's allegedly pitting employees against each other in unhealthy ways, the people said.

Although the board members didn't use the language of abuse to describe Altman's behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board's ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The new complaints triggered a review of Altman's conduct during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic. They also considered reports from several employees who said they feared retaliation from Altman: One told the board that Altman was hostile after the employee shared critical feedback with the CEO and that he undermined the employee on that person's team, the people said...

The complaints about Altman's alleged behavior, which have not previously been reported, were a major factor in the board's abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman's firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

Bloomberg reported Friday: The board had heard from some senior executives at OpenAI who had issues with Altman, said one person familiar with directors' thinking. But employees approached board members warily because they were scared of potential repercussions of Altman finding out they had spoken out against him, the person said.
Two other interesting details from the Post's article:
  • While over 95% of the company's employees signed an open letter after Altman's firing demanding his return, "On social media, in news reports and on the anonymous app Blind, which requires members to sign up with a work email address to post, people identified as current OpenAI employees also described facing intense peer pressure to sign the mass-resignation letter."
  • The Post also spotted "a cryptic post" on X Wednesday from OpenAI co-founder and chief scientist Ilya Sutskever about lessons learned over the past month: "One such lesson is that the phrase 'the beatings will continue until morale improves' applies more often than it has any right to." (The Post adds that the tweet was quickly deleted.)

The Post also reported in November that "Before OpenAI, Altman was asked to leave by his mentor at the prominent start-up incubator Y Combinator, part of a pattern of clashes that some attribute to his self-serving approach."


Social Networks

Threads Adds Hashtags Ahead of EU Launch (9to5google.com) 11

Ahead of its December 14th launch in the European Union, Meta's Twitter-like social media platform, Threads, is adding a simplified version of hashtags to help users find related posts. 9to5Google reports: Announced in a post on Threads today, Meta is adding "Tags" to the social platform as a way to categorize a post and have it show up alongside other posts on the same topic. Tags work similarly to hashtags in the sense that they group together content, but they also work differently. Unlike hashtags, you can only have one tag/topic on a post. So, where many platforms (including Instagram) suffer somewhat from posts being flooded with dozens of hashtags appended to the bottom, Threads seemingly avoids that entirely. Meta says that this "makes it easier for others who care about that topic to find and read your post."

The other big difference with tags is how they appear in posts. Tags can be added by typing the # symbol in line with the text, but they don't appear with the symbol in the published post. Instead, they appear in blue text in the post, much like a traditional hyperlink. You can also add a tag by tapping the "#" symbol on the new post UI.
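To make the single-tag behavior concrete, here is a minimal, purely illustrative Python sketch (not Meta's implementation; the helper name and regex are assumptions): it treats only the first inline "#word" as the post's topic and strips the "#" from the display text, leaving the blue link-style rendering to the client.

```python
import re

# Hypothetical helper mirroring the behavior described above: a post carries at
# most one tag, typed inline with "#", and the published post shows the tag
# without the "#" symbol (the client would render it as a blue, link-like span).
TAG_PATTERN = re.compile(r"#(\w+)")

def extract_single_tag(text):
    """Return (display_text, tag). Only the first '#word' counts as the topic."""
    match = TAG_PATTERN.search(text)
    if not match:
        return text, None
    tag = match.group(1)
    # Drop the '#' but keep the word itself visible in the published post.
    display_text = text.replace("#" + tag, tag, 1)
    return display_text, tag

# Example: the topic is "photography"; the display text keeps the word, not the '#'.
print(extract_single_tag("Loving the new lenses #photography"))
```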
As for the EU launch, Meta has opted to "sneakily update the Threads website with an untitled countdown timer (which won't be viewable in countries where Threads is already available) with just under six days remaining on the clock," reports The Verge. "European Instagram users can also search for the term 'ticket' within the app to discover a digital invitation to Threads, alongside a scannable QR code and a launch time -- which may vary depending on the country in which the user is based."

"The delay in Threads' rollout to the EU has been caused by what Meta spokesperson Christine Pai described as 'upcoming regulatory uncertainty,' likely in reference to strict rules under the bloc's Digital Markets Act (DMA)."
Businesses

Amazon Says Thieves Swiped Millions by Faking Product Refunds (bloomberg.com) 26

Amazon sued what it called an international ring of thieves who swiped millions of dollars in merchandise from the company through a series of refund scams that included buying products on Amazon and seeking refunds without returning the goods. From a report: An organization called REKK advertised its refund services on social media sites, including Reddit and Discord, and communicated with perpetrators on the messaging app Telegram, Amazon said in a lawsuit filed Thursday in US District Court in the state of Washington.

The lawsuit names REKK and nearly 30 people from the US, Canada, UK, Greece, Lithuania and the Netherlands as defendants in the scheme, which involved hacking into Amazon's internal systems and bribing Amazon employees to approve reimbursements. REKK charged customers, who wanted to get pricey items like MacBook Pro laptops and car tires without paying for them, a commission based on the value of the purchase. "The defendants' scheme tricks Amazon into processing refunds for products that are never returned; instead of returning the products as promised, defendants keep the product and the refund," Amazon said in its lawsuit.

Social Networks

Actors Recorded Videos for 'Vladimir.' It Turned Into Russian Propaganda. (wsj.com) 70

Internet propagandists aligned with Russia have duped at least seven Western celebrities, including Elijah Wood and Priscilla Presley, into recording short videos to support its online information war against Ukraine, according to new security research by Microsoft. From a report: The celebrities look like they were asked to offer words of encouragement -- apparently via the Cameo app -- to someone named "Vladimir" who appears to be struggling with substance abuse, Microsoft said. Instead, these messages were edited, sometimes dressed up with emojis, links and the logos of media outlets and then shared online by the Russia-aligned trolls, the company said.

The point was to give the appearance that the celebrities were confirming that Ukrainian President Volodymyr Zelensky was suffering from drug and alcohol problems, false claims that Russia has pushed in the past, according to Microsoft. Russia has denied engaging in disinformation campaigns. In one of the videos, a crudely edited message by Wood to someone named Vladimir references drugs and alcohol, saying: "I just want to make sure that you're getting help." Wood's video first surfaced in July, but since then Microsoft researchers have observed six other similar celebrity videos misused in the same way, including clips by "Breaking Bad" actor Dean Norris, John C. McGinley of "Scrubs," and Kate Flannery of "The Office," the company said.

Technology

How Tech Giants Use Money, Access To Steer Academic Research (washingtonpost.com) 19

Tech giants including Google and Facebook parent Meta have dramatically ramped up charitable giving to university campuses over the past several years -- giving them influence over academics studying such critical topics as artificial intelligence, social media and disinformation. From a report: Meta CEO Mark Zuckerberg alone has donated money to more than 100 university campuses, either through Meta or his personal philanthropy arm, according to new research by the Tech Transparency Project, a nonprofit watchdog group studying the technology industry. Other firms are helping fund academic centers, doling out grants to professors and sitting on advisory boards reserved for donors, researchers told The Post.

Silicon Valley's influence is most apparent among computer science professors at such top-tier schools as Berkeley, University of Toronto, Stanford and MIT. According to a 2021 paper by University of Toronto and Harvard researchers, most tenure-track professors in computer science at those schools whose funding sources could be determined had taken money from the technology industry, including nearly 6 of 10 scholars of AI. The proportion rose further in certain controversial subjects, the study found. Of 33 professors whose funding could be traced who wrote on AI ethics for the top journals Nature and Science, for example, all but one had taken grant money from the tech giants or had worked as their employees or contractors.

AI

Maybe We Already Have Runaway Machines 45

A new book argues that the invention of states and corporations has something to teach us about A.I. But perhaps it's the other way around. From a report: One of the things that make the machine of the capitalist state work is that some of its powers have been devolved upon other artificial agents -- corporations. Where [David] Runciman (a professor of politics at Cambridge) compares the state to a general A.I., one that exists to serve a variety of functions, corporations have been granted a limited range of autonomy in the form of what might be compared to a narrow A.I., one that exists to fulfill particular purposes that remain beyond the remit or the interests of the sovereign body.

Corporations can thus be set up in free pursuit of a variety of idiosyncratic human enterprises, but they, too, are robotic insofar as they transcend the constraints and the priorities of their human members. The failure mode of governments is to become "exploitative and corrupt," Runciman notes. The failure mode of corporations, as extensions of an independent civil society, is that "their independence undoes social stability by allowing those making the money to make their own rules."

There is only a "narrow corridor" -- a term Runciman borrows from the economists Daron Acemoglu and James A. Robinson -- in which the artificial agents balance each other out, and citizens get to enjoy the sense of control that emerges from an atmosphere of freedom and security. The ideal scenario is, in other words, a kludgy equilibrium.
Science

Remote Collaboration Fuses Fewer Breakthrough Ideas 42

Abstract of a paper: Theories of innovation emphasize the role of social networks and teams as facilitators of breakthrough discoveries. Around the world, scientists and inventors are more plentiful and interconnected today than ever before. However, although there are more people making discoveries, and more ideas that can be reconfigured in new ways, research suggests that new ideas are getting harder to find -- contradicting recombinant growth theory. Here we shed light on this apparent puzzle. Analysing 20 million research articles and 4 million patent applications from across the globe over the past half-century, we begin by documenting the rise of remote collaboration across cities, underlining the growing interconnectedness of scientists and inventors globally.

We further show that across all fields, periods and team sizes, researchers in these remote teams are consistently less likely to make breakthrough discoveries relative to their on-site counterparts. Creating a dataset that allows us to explore the division of labour in knowledge production within teams and across space, we find that among distributed team members, collaboration centres on late-stage, technical tasks involving more codified knowledge. Yet they are less likely to join forces in conceptual tasks -- such as conceiving new ideas and designing research -- when knowledge is tacit. We conclude that despite striking improvements in digital technology in recent years, remote teams are less likely to integrate the knowledge of their members to produce new, disruptive ideas.
Social Networks

Twitch To Shut Down in Korea Over 'Prohibitively Expensive' Network Fees 44

Twitch, the popular video streaming service, plans to shut down its business in South Korea on February 27 after finding that operating in one of the world's largest esports markets is "prohibitively expensive." From a report: Twitch CEO Dan Clancy said the firm undertook a "significant effort" to reduce the network costs to operate in Korea, but ultimately the fees to operate in the East Asian nation were still 10 times higher than in most other countries. The ceasing of operations in Korea is a "unique situation," he wrote in a blog post.

South Korea's expensive internet fees have led to legal fights -- streaming giant Netflix unsuccessfully sued a local broadband supplier last year to avoid paying usage charges, but Seoul's court ruled that Netflix must contribute to the network costs enabling its half-billion-dollar Korean business. Twitch attempted to lower its network costs by experimenting with a peer-to-peer model and then downgrading the streaming quality to 720p video resolution, Clancy said. While these efforts helped the firm lower its network costs, it wasn't enough.
United Kingdom

The UK Tries, Once Again, To Age-Gate Pornography (theverge.com) 95

Jon Porter reports via The Verge: UK telecoms regulator Ofcom has laid out how porn sites could verify users' ages under the newly passed Online Safety Act. Although the law gives sites the choice of how they keep out underage users, the regulator is publishing a list of measures they'll be able to use to comply. These include having a bank or mobile network confirm that a user is at least 18 years old (with that user's consent) or asking a user to supply valid details for a credit card that's only available to people who are 18 and older. The regulator is consulting on these guidelines starting today and hopes to finalize its official guidance in roughly a year's time.

Ofcom lists six age verification methods in today's draft guidelines. As well as turning to banks, mobile networks, and credit cards, other suggested measures include asking users to upload photo ID like a driver's license or passport, or for sites to use "facial age estimation" technology to analyze a person's face to determine that they've turned 18. Simply asking a site visitor to declare that they're an adult won't be considered strict enough. Once the duties come into force, pornography sites will be able to choose from Ofcom's approaches or implement their own age verification measures so long as they're deemed to hit the "highly effective" bar demanded by the Online Safety Act. The regulator will work with larger sites directly and keep tabs on smaller sites by listening to complaints, monitoring media coverage, and working with frontline services. Noncompliance with the Online Safety Act can be punished with fines of up to [$22.7 million] or 10 percent of global revenue (whichever is higher).

The guidelines being announced today will eventually apply to pornography sites both big and small so long as the content has been "published or displayed on an online service by the provider of the service." In other words, they're designed for professionally made pornography, rather than the kinds of user-generated content found on sites like OnlyFans. That's a tricky distinction when the two kinds often sit together side by side on the largest tube sites. But Ofcom will be opening a consultation on rules for user-generated content, search engines, and social media sites in the new year, and Whitehead suggests that both sets of rules will come into effect at around the same time.

Medicine

Research Finds That Renting Ages You Faster Than Smoking, Obesity 267

schwit1 shares a report from the New York Post: A landmark study out of the University of Adelaide and University of Essex has found that living in a private rental property accelerates the biological aging process by more than two weeks every year. The research found renting had worse effects on biological age than being unemployed (adding 1.4 weeks per year), obesity (adding 1 week per year), or being a former smoker (adding about 1.1 weeks). University of Adelaide Professor of Housing Research Emma Baker said private renting added "about two-and-a-half weeks of aging" per year to a person's biological clock, compared to those who own their homes.

"In fact, private rental is the really interesting thing here, because social renters, for some reason, don't seem to have that effect," Professor Baker told the ABC News Daily podcast. She said the security of social renting -- aka public housing -- and homeownership has compared to people living with an end-of-lease date on their calendars. "When you look at big studies of the Australian population, you see that the average rental lease is between six and 12 months," she said. "So even if you have your lease extended, you still are living in that slight state of kind of unknowingness, really not quite secure if your lease is actually going to be extended or not." "We think that that is one of the things that's contributing to loss of years, effectively."
Crime

YouTuber Who Deliberately Crashed Plane For Views Is Headed To Federal Prison (yahoo.com) 122

Trevor Jacob, a daredevil YouTuber who deliberately crashed a plane for views in a moneymaking scheme, has been sentenced to six months in federal prison. Jacob posted a video of himself in 2021 parachuting out of a plane that he claimed had malfunctioned. In reality, the aircraft was purposely abandoned and crashed into the Los Padres National Forest in Southern California. From a report: Jacob pleaded guilty to one felony count of destruction and concealment with the intent to obstruct a federal investigation on June 30. "It appears that (Jacob) exercised exceptionally poor judgment in committing this offense," prosecutors said in the release. "(Jacob) most likely committed this offense to generate social media and news coverage for himself and to obtain financial gain. Nevertheless, this type of 'daredevil' conduct cannot be tolerated."

Jacob received a sponsorship from a company and had agreed to promote the company's wallet in the YouTube video that he would post. [...] The release said Jacob lied to federal investigators when he filed a report that falsely indicated his plane lost full power approximately 35 minutes into the flight. He also lied to a Federal Aviation Administration aviation safety inspector when he said he had parachuted out of the plane when the airplane's engine had quit because he could not identify any safe landing options.

Movies

Rockstar Officially Unveils GTA 6 Trailer (ign.com) 78

Rockstar Games has officially revealed the trailer for Grand Theft Auto VI, which is coming in 2025. You can watch it on YouTube. IGN reports: The trailer gives us a look at the game's female protagonist, a first for the series. Her name is Lucia, and that she starts off in prison -- "bad luck, I guess," she quips. The trailer confirms, too, that it's set in Vice City with a large sign - not a huge surprise for those who've been following the series, but exciting nonetheless. In addition to lots and lots of quick shots of crime, we also get glimpses of TikToks and live-streams, hinting that social media will be a big part of this game.

It all takes place as Tom Petty's "Love Is a Long Road" plays in the background, appropriate for the many car-related crimes we see. And yes, true to the Florida setting, there are alligators in locations where they shouldn't be. It ends by showing us a little more of Lucia and a male character, seemingly both lovers and partners-in-crime. "The only way we're gonna get through this is by sticking together and being a team," Lucia says at one point. Fans have been talking about GTA 6 ever since GTA 5 was released in 2013, perhaps unsurprisingly as IGN deemed that one a "masterpiece" in our review.
