Cellphones

Denmark To Ban Mobile Phones In Schools and After-School Clubs (theguardian.com)

Denmark is set to ban mobile phones in schools and after-school clubs, following a government commission's recommendation that children under 13 should not have their own smartphones. The Guardian reports: The government said it would change existing legislation to force all folkeskole -- comprehensive primary and lower secondary schools -- to become phone-free, meaning that almost all children aged between seven and 16 or 17 will be required by law not to bring their phones into school. The announcement marks a U-turn by the government, which had previously refused to introduce such a law. It comes as governments across Europe are trying to impose tighter regulations on children's access to phones and social media.

The Danish wellbeing commission was set up by the prime minister, Mette Frederiksen, in 2023 to investigate growing dissatisfaction among children and young people. Its long-awaited report, published on Tuesday, raised the alarm over the digitisation of children and young people's lives and called for a better balance between digital and analogue life. Among its 35 recommendations was the need for government legislation banning phones from schools and after-school clubs.

The minister for children and education, Mattias Tesfaye, told Politiken: "There is a need to reclaim the school as an educational space, where there is room for reflection and where it is not an extension of the teenage bedroom." There will be scope for local authorities to make exceptions, including for children with special educational needs, but he said mobile phones and personal tablets "do not belong in school, neither during breaks nor during lessons." He said the government had started preparing a legislative amendment.

Power

A Major Blackout Hits Chile, Leaving Millions Without Power (apnews.com)

A massive blackout has hit Chile, leaving millions without power and disrupting transportation, businesses, and essential services across 14 of the country's 16 regions. The Associated Press reports: The National Electrical Coordinator, Chile's grid operator, said a disruption had occurred in a high-voltage transmission line that carries power from the Atacama Desert of northern Chile to the capital of Santiago in the country's central valley. It did not say what actually caused the disruption that pushed much of the country's power grid into shutdown, from the northernmost Chilean port of Arica to the southern Los Lagos agricultural region.

Interior Minister Carolina Toha said hospitals, prisons and government buildings were switching on backup generators to keep essential equipment operating. In a press conference, Toha urged the public to stay calm and said officials were racing to put the grid back in operation and restore electric service across the country of some 19 million people. "It's affecting the entire electrical system of the country," she said of the breakdown in the 500-kV backbone transmission line. Toha said if all areas didn't return to normal by sunset the government would take emergency measures to avert a crisis. [...]

Videos on social media from all over Chile, a long ribbon of a country stretching 4,300 kilometers (over 2,600 miles) along the southern Pacific coast, showed chaos at intersections with no functioning traffic lights, people having to use their mobile phones as torches in the underground metro and police dispatched to help evacuate office buildings.

AI

Most US Workers Avoid AI Chatbots Despite Productivity Benefits, Pew Finds (pewresearch.org)

Most American workers are not embracing AI chatbots in their jobs, with 55% rarely or never using these tools and 29% completely unfamiliar with them, according to a Pew Research Center survey released Tuesday.

Only 16% of workers report regular use of AI chatbots like ChatGPT, Gemini or Copilot. Adoption is highest among younger workers (23% of those aged 18-29) and those with post-graduate degrees (26%). Among users, research (57%), editing content (52%), and drafting reports (47%) top the list of applications. While 40% find chatbots extremely or very helpful for working faster, just 29% say they significantly improve work quality.

For the majority who don't use AI chatbots, 36% cite lack of relevance to their job as the primary reason. Employer attitudes remain largely neutral, with half neither encouraging nor discouraging usage. The technology sector leads in workplace adoption, with 36% of employers actively promoting chatbot use, followed by financial services (24%).

AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com)

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside."

Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part came when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asked Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"

Privacy

California Sues Data-Harvesting Company NPD, Enforcing Strict Privacy Law (msn.com)

California has sued to fine a data-harvesting company, reports the Washington Post, calling it "a rare step to put muscle behind one of the strongest online privacy laws in the United States." Even when states have tried to restrict data brokers, it has been tough to make those laws stick. That has generally been a problem for the 19 states that have passed broad laws to protect personal information, said Matt Schwartz, a policy analyst for Consumer Reports. He said there have been only 15 or so public enforcement actions by regulators overseeing all those laws. Partly because companies aren't held accountable, they're empowered to ignore the privacy standards. "Noncompliance is fairly widespread," Schwartz said. "It's a major problem."

That's why California is unusual with a data broker law that seems to have teeth. To make sure state residents can order all data brokers operating in the state to delete their personal records [with a single request], California is now requiring brokers to register with the state or face a fine of $200 a day. The state's privacy watchdog said Thursday that it filed litigation to force one data broker, National Public Data, to pay $46,000 for failing to comply with that initial phase of the data broker law. NPD declined to comment through an attorney... This first lawsuit for noncompliance, Schwartz said, shows that California is serious about making companies live up to their privacy obligations... "If they can successfully build it and show it works, it will create a blueprint for other states interested in this idea," he said.

Last summer NPD "spilled hundreds of millions of Americans' Social Security Numbers, addresses, and phone numbers online," according to the blog Krebs on Security, adding that another NPD data broker sharing access to the same consumer records "inadvertently published the passwords to its back-end database in a file that was freely available from its homepage..."

California's attempt to regulate the industry inspired the nonprofit Consumer Reports to create an app called Permission Slip that reveals what data companies collect and, for people in U.S. states, will "work with you to file a request, telling companies to stop selling your personal information."

Other data-protecting options suggested by The Washington Post:
  • Use Firefox, Brave or DuckDuckGo, "which can automatically tell websites not to sell or share your data. Those demands from the web browsers are legally binding or will be soon in at least nine states." (A sketch of how a site reads this browser signal follows the list.)
  • Use Privacy Badger, a browser extension from the EFF, which the organization says "automatically tells websites not to sell or share your data including where it's required by state law."
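
Both recommendations rely on the same underlying mechanism: Global Privacy Control (GPC), a preference the browser transmits with every request as the HTTP header Sec-GPC: 1 (and exposes to page scripts as navigator.globalPrivacyControl). As a minimal sketch of the receiving end, here is how a site might detect and honor that signal; the Flask app and the opt_out_of_sale helper are illustrative assumptions, not code from any company mentioned above.

```python
# Minimal sketch: detecting and honoring the Global Privacy Control (GPC)
# signal server-side. Supporting browsers send the "Sec-GPC: 1" request
# header. Flask and opt_out_of_sale() are hypothetical stand-ins, not the
# stack of any site or vendor named in the story.
from flask import Flask, request

app = Flask(__name__)

def opt_out_of_sale(visitor_id: str) -> None:
    """Hypothetical helper: record a do-not-sell/share preference."""
    print(f"opt-out recorded for {visitor_id}")

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # In states whose privacy laws recognize GPC (California among them),
        # the signal is treated as a binding opt-out request.
        opt_out_of_sale(request.remote_addr)
        return "GPC honored: your data will not be sold or shared."
    return "No GPC signal received."

if __name__ == "__main__":
    app.run()
```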

China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program; those documents suggest it was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review in which it claims to have written phishing emails on behalf of clients in China.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open-source five code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production." the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.

Piracy

Meta Claims Torrenting Pirated Books Isn't Illegal Without Proof of Seeding (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn't necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing (PDF) this week. Evidence instead shows that Meta "took precautions not to 'seed' any downloaded files," Meta's filing said. Seeding refers to sharing a torrented file after the download completes, and because there's allegedly no proof of such "seeding," Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.

[...] Meta ... is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a "widely-used protocol to download large files." According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access "data from a 'well-known online repository' that was publicly available via torrents." To defend its torrenting, Meta has basically scrubbed the word "pirate" from the characterization of its activity. The company alleges that authors can't claim that Meta gained unauthorized access to their data under CDAFA. Instead, all they can claim is that "Meta allegedly accessed and downloaded datasets that Plaintiffs did not create, containing the text of published books that anyone can read in a public library, from public websites Plaintiffs do not operate or own."

While Meta may claim there's no evidence of seeding, there is some testimony that might be compelling to the court. Previously, a Meta executive in charge of project management, Michael Clark, had testified (PDF) that Meta allegedly modified torrenting settings "so that the smallest amount of seeding possible could occur," which seems to support authors' claims that some seeding occurred. And an internal message (PDF) from Meta researcher Frank Zhang appeared to show that Meta allegedly tried to conceal the seeding by not using Facebook servers while downloading the dataset to "avoid" the "risk" of anyone "tracing back the seeder/downloader" from Facebook servers. Once this information came to light, authors asked the court for a chance to depose Meta executives again, alleging that new facts "contradict prior deposition testimony."
"Meta has been 'silent so far on claims about sharing data while 'leeching' (downloading) but told the court it plans to fight the seeding claims at summary judgement," notes Ars.
Censorship

FTC Launches Broad Tech 'Censorship' Probe Targeting Meta, Uber (yahoo.com)

The U.S. Federal Trade Commission has launched an inquiry into potential "censorship" by technology platforms ranging from Meta to Uber, marking an escalation in scrutiny of content moderation practices. FTC Chair Andrew Ferguson called for public comment on what he termed "Big Tech censorship," describing it as "un-American" and "potentially illegal."

The broad probe could examine social media, video sharing, ride-sharing and event planning services. The announcement follows long-standing Republican claims that conservative viewpoints face discrimination on social media platforms.

EU

WhatsApp Faces Tougher EU Rules As Users Top 45 Million (msn.com)

Meta's WhatsApp messaging service has surpassed 45 million users, earning the designation of a "Very Large Online Platform" under the EU's Digital Services Act. Bloomberg reports: WhatsApp's open channels, which are feeds affiliated with news outlets or public figures that under the DSA are comparable to a social network, averaged about 46.8 million monthly users in the second half of 2024, Meta said in a filing on Feb. 14 that hasn't previously been reported. [...] The DSA content moderation rulebook imposes stricter requirements on very large online platforms, defined as those whose EU-based monthly active users exceed 45 million. Users of WhatsApp's core messaging feature do not count toward the designation under the DSA.

The commission would still need to rule that WhatsApp should be included in the more regulated tier. Under the DSA, very large online platforms must carry out risk assessments on the spread of illegal or harmful content, and put in place a mitigation strategy. Fines under the DSA can reach as much as 6% of a company's annual global sales. The DSA requires platforms to disclose user numbers every six months. Messaging service Telegram also published an update this week, saying that monthly EU users of its public channels are "significantly fewer than 45 million."

Software

'Uber For Armed Guards' Rushes To Market

An anonymous reader quotes a report from Gizmodo: Protector, an app that lets you book armed goons the same way you'd call for an Uber, is having a viral moment. The app started doing the rounds on social media after consultant Nikita Bier posted about it on X. Protector lets the user book armed guards on demand. Right now it's only available in NYC and LA. According to its marketing, every guard is either "active duty or retired law enforcement and military." Every booking comes with a motorcade and users get to select the number of Escalades that'll be joining them as well as the uniforms their hired goons will wear.

Protector is currently "#7 in Travel" on Apple's App Store. It's not available for people who use Android devices. [...] The marketing for Protector, which lives on its X account, is surreal. A series of robust and barrel-chested men in ill-fitting black suits deliver their credentials to the camera while sitting in front of a black background. They're all operators. They describe careers in SWAT teams and being deployed to war zones. They show vanity shots of themselves kitted out in operator gear. All of them have a red lapel pin bearing the symbol of Protector.

If the late UnitedHealthcare CEO had used Protector, he might still be alive today, suggests Protector in its marketing materials. A video on X shows "several fantasy versions of the assassination where a Protector is on hand to prevent the assassin from killing the CEO," reports Gizmodo.

The app is a product from parent company Protector Security Solutions, which was founded by Nick Sarath, a former product designer at Meta.

Technology

Chase Will Soon Block Zelle Payments To Sellers on Social Media (bleepingcomputer.com)

An anonymous reader shares a report: JPMorgan Chase Bank (Chase) will soon start blocking Zelle payments to social media contacts to combat a significant rise in online scams utilizing the service for fraud.

Zelle is a highly popular digital payments network that allows users to transfer money quickly and securely between bank accounts. It is also integrated into the mobile apps of many banks in the United States, allowing for almost instant transfers without requiring cash or checks, but it lacks one crucial feature: purchase protection.

Social Networks

Are Technologies of Connection Tearing Us Apart? (lareviewofbooks.org)

Nicholas Carr wrote The Shallows: What the Internet Is Doing to Our Brains. But his new book looks at how social media and digital communication technologies "are changing us individually and collectively," writes the Los Angeles Review of Books.

The book's title? Superbloom: How Technologies of Connection Tear Us Apart. But if these systems are indeed tearing us apart, the reasons are neither obvious nor simple. Carr suggests that this isn't really about the evil behavior of our tech overlords but about how we have "been telling ourselves lies about communication — and about ourselves.... Well before the net came along," says Carr, "[the] evidence was telling us that flooding the public square with more information from more sources was not going to open people's minds or engender more thoughtful discussions. It wasn't even going to make people better informed...."

At root, we're the problem. Our minds don't simply distill useful knowledge from a mass of raw data. They use shortcuts, rules of thumb, heuristic hacks — which is how we were able to think fast enough to survive on the savage savanna. We pay heed, for example, to what we experience most often. "Repetition is, in the human mind, a proxy for facticity," says Carr. "What's true is what comes out of the machine most often...." Reality can't compete with the internet's steady diet of novelty and shallow, ephemeral rewards. The ease of the user interface, congenial even to babies, creates no opportunity for what writer Antón Barba-Kay calls "disciplined acculturation."

Not only are these technologies designed to leverage our foibles, but we are also changed by them, as Carr points out: "We adapt to technology's contours as we adapt to the land's and the climate's." As a result, by designing technology, we redesign ourselves. "In engineering what we pay attention to, [social media] engineers [...] how we talk, how we see other people, how we experience the world," Carr writes. We become dislocated, abstracted: the self must itself be curated in memeable form. "Looking at screens made me think in screens," writes poet Annelyse Gelman. "Looking at pixels made me think in pixels...."

That's not to say that we can't have better laws and regulations, checks and balances. One suggestion is to restore friction into these systems. One might, for instance, make it harder to unreflectively spread lies by imposing small transactional costs, as has been proposed to ease the pathologies of automated market trading. An option Carr doesn't mention is to require companies to perform safety studies on their products, as we demand of pharmaceutical companies. Such measures have already been proposed for AI. But Carr doubts that increasing friction will make much difference. And placing more controls on social media platforms raises free speech concerns... We can't change or constrain the tech, says Carr, but we can change ourselves. We can choose to reject the hyperreal for the material. We can follow Samuel Johnson's refutation of immaterialism by "kicking the stone," reminding ourselves of what is real.

Social Networks

Despite Plans for AI-Powered Search, Reddit's Stock Fell 14% This Week (yahoo.com)

"Reddit Answers" uses generative AI to answer questions using what past Reddittors have posted. Announced in December, Reddit now plans to integrate it into their search results, reports TechCrunch, with Reddit's CEO saying the idea has "incredible monetization potential."

And yet Reddit's stock fell 14% this week. CNBC's headline? "Reddit shares plunge after Google algorithm change contributes to miss in user numbers." A Google search algorithm change caused some "volatility" with user growth in the fourth quarter, but the company's search-related traffic has since recovered in the first quarter, Reddit CEO Steve Huffman said in a letter to shareholders. "What happened wasn't unusual — referrals from search fluctuate from time to time, and they primarily affect logged-out users," Huffman wrote. "Our teams have navigated numerous algorithm updates and did an excellent job adapting to these latest changes effectively...." Reddit has said it is working to convert logged-out users into logged-in users, who are more lucrative for its business.

As Yahoo Finance once pointed out, Reddit knew this day would come, acknowledging in its IPO filing that "changes in internet search engine algorithms and dynamics could have a negative impact on traffic for our website and, ultimately, our business." And in the last three months of 2024 Reddit's daily active users dropped, Yahoo Finance reported this week. But logged-in users increased by 400,000 — while logged-out users dropped by 600,000 (their first drop in almost two years).

MarketWatch notes that analyst Josh Beck sees this as a buying opportunity for Reddit's stock: Beck pointed to comments from Reddit's management regarding a sharp recovery in daily active unique users. That was likely driven by Google benefiting from deeper Reddit crawling, by the platform uncollapsing comments in search results and by a potential benefit from spam-reduction algorithm updates, according to the analyst. "While the report did not clear our anticipated bar, we walk away encouraged by international upside," he wrote.

AI

PIN AI Launches Mobile App Letting You Make Your Own Personalized, Private AI Model (venturebeat.com)

An anonymous reader quotes a report from VentureBeat: A new startup PIN AI (not to be confused with the poorly reviewed hardware device the AI Pin by Humane) has emerged from stealth to launch its first mobile app, which lets a user select an underlying open-source AI model that runs directly on their smartphone (iOS/Apple iPhone and Google Android supported) and remains private and totally customized to their preferences. Built with a decentralized infrastructure that prioritizes privacy, PIN AI aims to challenge big tech's dominance over user data by ensuring that personal AI serves individuals -- not corporate interests. Founded by AI and blockchain experts from Columbia, MIT and Stanford, PIN AI is led by Davide Crapis, Ben Wu and Bill Sun, who bring deep experience in AI research, large-scale data infrastructure and blockchain security. [...]

PIN AI introduces an alternative to centralized AI models that collect and monetize user data. Unlike cloud-based AI controlled by large tech firms, PIN AI's personal AI runs locally on user devices, allowing for secure, customized AI experiences without third-party surveillance. At the heart of PIN AI is a user-controlled data bank, which enables individuals to store and manage their personal information while allowing developers access to anonymized, multi-category insights -- ranging from shopping habits to investment strategies. This approach ensures that AI-powered services can benefit from high-quality contextual data without compromising user privacy. [...] The new mobile app launched in the U.S. and multiple regions also includes key features such as:

- The "God model" (guardian of data): Helps users track how well their AI understands them, ensuring it aligns with their preferences.
- Ask PIN AI: A personalized AI assistant capable of handling tasks like financial planning, travel coordination and product recommendations.
- Open-source integrations: Users can connect apps like Gmail, social media platforms and financial services to their personal AI, training it to better serve them without exposing data to third parties.
- "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
Davide Crapis, co-founder of PIN AI, told VentureBeat that the app currently supports several open-source AI models, including small versions of DeepSeek and Meta's Llama. "With our app, you have a personal AI that is your model," Crapis added. "You own the weights, and it's completely private, with privacy-preserving fine-tuning."
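
For context, fully local inference with a small open-source model of the kind Crapis describes usually looks something like the sketch below. It uses the llama-cpp-python bindings and a placeholder path to a quantized GGUF weights file; this is a generic illustration of on-device AI under those assumptions, not PIN AI's actual implementation.

```python
# Generic sketch of on-device LLM inference with llama-cpp-python.
# The model path is a placeholder for a small quantized open-source
# checkpoint (e.g., a Llama- or DeepSeek-family GGUF file) downloaded
# beforehand. This is NOT PIN AI's code; it only illustrates the idea
# that prompts and responses never leave the device.
from llama_cpp import Llama

llm = Llama(model_path="./models/small-model.Q4_K_M.gguf", n_ctx=2048)

response = llm(
    "Plan a three-day travel itinerary for Copenhagen.",
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```

Because the weights are just a file on the device, "owning the weights" in Crapis's sense reduces to owning that file and running inference with no network calls.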

You can sign up for early access to the PIN AI app here.

Linux

Lead Asahi Linux Developer Quits Days After Leaving Kernel Maintainer Role (theregister.com)

Hector Martin has resigned as the project lead of Asahi Linux, weeks after stepping down from his role as a Linux kernel maintainer for Apple ARM support. His departure from Asahi follows a contentious exchange with Linus Torvalds over development processes and social media advocacy. After quitting kernel maintenance earlier this month, the conflict escalated when Martin suggested that "shaming on social media" might be necessary to effect change.

Torvalds sharply rejected this approach, stating that "social media brigading just makes me not want to have anything at all to do with your approach" and suggested that Martin himself might be the problem. In his final resignation announcement from Asahi, Martin wrote: "I no longer have any faith left in the kernel development process or community management approach."

The dispute reflects deeper tensions in the Linux kernel community, particularly around the integration of Rust code. It follows the August departure of another key Rust for Linux maintainer, Wedson Almeida Filho from Microsoft. According to Sonatype's research, more than 300,000 open source projects have slowed or halted updates since 2020.

Social Networks

Reddit Plans To Lock Some Content Behind a Paywall This Year, CEO Says (arstechnica.com)

An anonymous reader shares a report: Reddit is planning to introduce a paywall this year, CEO Steve Huffman said during a videotaped Ask Me Anything (AMA) session on Thursday. Huffman previously showed interest in potentially introducing a new type of subreddit with "exclusive content or private areas" that Reddit users would pay to access.

When asked this week about plans for some Redditors to create "content that only paid members can see," Huffman said: "It's a work in progress right now, so that one's coming... We're working on it as we speak." When asked about "new, key features that you plan to roll out for Reddit in 2025," Huffman responded, in part: "Paid subreddits, yes."

Social Networks

Apple To Restore TikTok To US App Store Following Justice Department Letter (9to5mac.com)

According to Bloomberg's Mark Gurman, Apple will restore TikTok to the U.S. App Store on Thursday (source paywalled; alternative source), following a letter from U.S. Attorney General Pam Bondi. From the report: Apple, along with Alphabet's Google, removed TikTok in the US to comply with a law passed last year. In a Jan. 20 executive order, Trump said he instructed the attorney general "not to take any action to enforce the act for a period of 75 days from today to allow my administration an opportunity to determine the appropriate course forward." Apple confirmed the app will return "Thursday evening." You can find the App Store listing for TikTok here.

Developing...

Businesses

AI Licensing Deals With Google and OpenAI Make Up 10% of Reddit's Revenue (adweek.com)

Reddit's recent earnings report revealed that AI licensing deals with Google and OpenAI account for about 10% of its $1.3 billion revenue, totaling approximately $130 million. With Google paying $60 million, OpenAI is estimated to be paying Reddit around $70 million annually for content licensing. Adweek reports: "It's a small part of our revenue -- I'll call it 10%. For a business of our size, that's material, because it's valuable revenue," [said the company's COO Jen Wong]. The social platform -- which on Wednesday reported a 71% year-over-year lift in fourth-quarter revenue -- has been "very thoughtful" about the AI developers it chooses to work with, Wong said. To date, the company has inked two content licensing deals: one with Google for a reported $60 million, and one with ChatGPT parent OpenAI.

Reddit has elected to work only with partners who can agree to "specific terms ... that are really important to us." These terms include user privacy protections and conditions regarding "how [Reddit is] represented," Wong said. While licensing agreements with AI firms offer a valuable business opportunity for Reddit, advertising remains the company's core revenue driver. Much of Reddit's $427.7 million Q4 revenues were generated by the ongoing expansion of its advertising business. And its ad revenue as a whole grew 60% YoY, underscoring the platform's growing appeal to brands. [...]

Helping to accelerate ad revenue growth is Reddit's rising traffic. While Reddit's Q4 user growth came in under Wall Street projections, causing shares to dip, its weekly active uniques grew 42% YoY to over 379 million visitors. Average revenue per unique visitor was $4.21 during the quarter, up 23% from the prior year. While Google is "nicely reinforcing" Reddit's growth in traffic, Wong said, she added that the site's logged-in users, which have grown 27% year-over-year, are "the bedrock of our business."

AI

Scarlett Johansson Calls For Deepfake Ban After AI Video Goes Viral (people.com)

An anonymous reader quotes a report from People: Scarlett Johansson is urging U.S. legislators to place limits on artificial intelligence as an unauthorized, A.I.-generated video of her and other Jewish celebrities opposing Kanye West goes viral. The video, which has been circulating on social media, opens with an A.I. version of Johansson, 40, wearing a white T-shirt featuring a hand with its middle finger extended. In the center of the hand is a Star of David. The name "Kanye" is written underneath the hand.

The video contains A.I.-generated versions of over a dozen other Jewish celebrities, including Drake, Jerry Seinfeld, Steven Spielberg, Mark Zuckerberg, Jack Black, Mila Kunis and Lenny Kravitz. It ends with an A.I. Adam Sandler flipping his finger at the camera as the Jewish folk song "Hava Nagila" plays. The video ends with "Enough is Enough" and "Join the Fight Against Antisemitism." In a statement to PEOPLE, Johansson denounced what she called "the misuse of A.I., no matter what its messaging."

Johansson continued: "It has been brought to my attention by family members and friends, that an A.I.-generated video featuring my likeness, in response to an antisemitic view, has been circulating online and gaining traction. I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind. But I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality."

"I have unfortunately been a very public victim of A.I.," she added, "but the truth is that the threat of A.I. affects each and every one of us. There is a 1000-foot wave coming regarding A.I. that several progressive countries, not including the United States, have responded to in a responsible manner. It is terrifying that the U.S. government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of A.I."

The statement concluded, "I urge the U.S. government to make the passing of legislation limiting A.I. use a top priority; it is a bipartisan issue that enormously affects the immediate future of humanity at large."

Johansson has been outspoken about AI technology since its rise in popularity. Last year, she called out OpenAI for using an AI personal assistant voice that the actress claims sounds uncannily similar to her own.
