AI

Scammers' New Way of Targeting Small Businesses: Impersonating Them (wsj.com) 17

Copycats are stepping up their attacks on small businesses. Sellers of products including merino socks and hummingbird feeders say they have lost customers to online scammers who use the legitimate business owners' videos, logos and social-media posts to assume their identities and steer customers to cheap knockoffs or simply take their money. WSJ: "We used to think you'd be targeted because you have a brand everywhere," said Alastair Gray, director of anticounterfeiting for the International Trademark Association, a nonprofit that represents brand owners. "It now seems with the ease at which these criminals can replicate websites, they can cut and paste everything." Technology has expanded the reach of even the smallest businesses, making it easy to court customers across the globe. But evolving technology has also boosted opportunities for copycats; ChatGPT and other advances in artificial intelligence make it easier to avoid language or spelling errors, often a signal of fraud.

Imitators also have fine-tuned their tactics, including by outbidding legitimate brands for top position in search results. "These counterfeiters will market themselves just like brands market themselves," said Rachel Aronson, co-founder of CounterFind, a Dallas-based brand-protection company. Policing copycats is particularly challenging for small businesses with limited financial resources and not many employees. Online giants such as Amazon.com and Meta Platforms say they use technology to identify and remove misleading ads, fake accounts or counterfeit products.

Space

SpaceX Hopes to Eventually Build One Starship Per Day at Its Texas 'Starfactory' (space.com) 305

SpaceX's successful launch (and reentry) of Starship was just the beginning, reports Space.com: SpaceX now aims to build on the progress with its Starship program as it continues work on Starfactory, a new manufacturing facility under construction at the company's Starbase site in South Texas... "When you step into this factory, it is truly inspirational. My heart jumps out of my chest," Kate Tice, manager of SpaceX Quality Systems Engineering, said [during SpaceX's livestream of the Starship flight test]. "Now this will enable us to increase our production rate significantly as we build toward our long-term goal of producing one Ship per day and coming off the production line soon, Starship Version Two."

This new version of Starship is designed to be easier to mass-produce, SpaceX CEO Elon Musk said on social media.

Space.com argues that the long-term expansion comes as SpaceX "looks to use Starship to eventually make humanity interplanetary."
Encryption

Researcher Finds Side-Channel Vulnerability in Post-Quantum Key Encapsulation Mechanism (thecyberexpress.com) 12

Slashdot reader storagedude shared this report from The Cyber Express: A security researcher discovered an exploitable timing leak in the Kyber key encapsulation mechanism (KEM) that's in the process of being adopted by NIST as a post-quantum cryptographic standard. Antoon Purnal of PQShield detailed his findings in a blog post and on social media, and noted that the problem has been fixed with the help of the Kyber team. The issue was found in the reference implementation of the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) that's in the process of being adopted as a NIST post-quantum key encapsulation standard. "A key part of implementation security is resistance against side-channel attacks, which exploit the physical side-effects of cryptographic computations to infer sensitive information," Purnal wrote.

To secure against side-channel attacks, cryptographic algorithms must be implemented in a way so that "no attacker-observable effect of their execution depends on the secrets they process," he wrote. In the ML-KEM reference implementation, "we're concerned with a particular side channel that's observable in almost all cryptographic deployment scenarios: time." The vulnerability can occur when a compiler optimizes the code, in the process silently undoing "measures taken by the skilled implementer." In Purnal's analysis, the Clang compiler was found to emit a vulnerable secret-dependent branch in the poly_frommsg function of the ML-KEM reference code needed in both key encapsulation and decapsulation, corresponding to the expand_secure implementation.
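The constant-time idiom at issue can be sketched in Python (an illustration only: the real reference code is C, and the function names here are hypothetical stand-ins for its bit-expansion routine). A branch on a secret message bit can leak it through timing, while the masked form computes the same coefficient without branching. What Python cannot reproduce is the twist Purnal found: in C, the compiler itself may rewrite the masked form back into a branch.

```python
KYBER_Q = 3329  # the ML-KEM modulus

def frommsg_branchy(bit: int) -> int:
    # Secret-dependent branch: which path executes depends on the
    # secret bit, so execution time can reveal it.
    if bit:
        return (KYBER_Q + 1) // 2
    return 0

def frommsg_masked(bit: int) -> int:
    # Constant-time idiom: -bit is all-ones (0xFFFF) for bit = 1 and
    # all-zeros for bit = 0, so the AND selects a value without branching.
    mask = -bit & 0xFFFF
    return mask & ((KYBER_Q + 1) // 2)

# Both compute the same coefficient; only the masked form avoids a
# secret-dependent branch (which, in C, a compiler may still undo).
assert all(frommsg_branchy(b) == frommsg_masked(b) for b in (0, 1))
```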

While the reference implementation was patched, "It's important to note that this does not rule out the possibility that other libraries, which are based on the reference implementation but do not use the poly_frommsg function verbatim, may be vulnerable — either now or in the future," Purnal wrote.

Purnal also published a proof-of-concept demo on GitHub. "On an Intel Core i7-13700H, it takes between 5-10 minutes to leak the entire ML-KEM 512 secret key using end-to-end decapsulation timing measurements."
AI

Artists Are Deleting Instagram For New App Cara In Protest of Meta AI Scraping (fastcompany.com) 21

Some artists are jumping ship for the anti-AI portfolio app Cara after Meta began using Instagram content to train its AI models. Fast Company explains: The portfolio app bills itself as a platform that protects artists' images from being used to train AI, and allows AI content to be posted only if it's clearly labeled. Based on the number of new users the Cara app has garnered over the past few days, there seems to be a need. Between May 31 and June 2, Cara's user base tripled from less than 100,000 to more than 300,000 profiles, skyrocketing to the top of the app store. [...] Cara is a social networking app for creatives, in which users can post images of their artwork, memes, or just their own text-based musings. It shares similarities with major social platforms like X (formerly Twitter) and Instagram on a few fronts. Users can access Cara through a mobile app or on a browser. Both options are free to use. The UI itself is like an arts-centric combination of X and Instagram. In fact, some UI elements seem like they were pulled directly from other social media sites. (It's not the most innovative approach, but it is strategic: as a new app, any barriers to potential adoption need to be low).

Cara doesn't train any AI models on its content, nor does it allow third parties to do so. According to Cara's FAQ page, the app aims to protect its users from AI scraping by automatically implementing "NoAI" tags on all of its posts. The website says these tags "are intended to tell AI scrapers not to scrape from Cara." Ultimately, they appear to be html metadata tags that politely ask bad actors not to get up to any funny business, and it's pretty unlikely that they hold any actual legal weight. Cara admits as much, too, warning its users that the tags aren't a "fully comprehensive solution and won't completely prevent dedicated scrapers." With that in mind, Cara assesses the "NoAI" tagging system as "a necessary first step in building a space that is actually welcoming to artists -- one that respects them as creators and doesn't opt their work into unethical AI scraping without their consent."
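As a rough sketch of what such metadata tags amount to, here is a minimal Python check for robots-style "noai" directives in a page's markup. The exact tag names and attributes Cara emits are an assumption here; "noai" and "noimageai" follow a convention used on other art platforms, and nothing forces a scraper to honor them.

```python
from html.parser import HTMLParser

class NoAITagFinder(HTMLParser):
    """Collects robots-style meta directives such as 'noai, noimageai'."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in a.get("content", "").split(",")]

page = '<html><head><meta name="robots" content="noai, noimageai"></head></html>'
finder = NoAITagFinder()
finder.feed(page)
print("noai" in finder.directives)  # True
```

A well-behaved crawler would run a check like this and skip the page; a "dedicated scraper," as Cara's own warning puts it, simply ignores the tag.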

In December, Cara launched another tool called Cara Glaze to defend its artists' work against scrapers. (Users can only use it a select number of times.) Glaze, developed by the SAND Lab at University of Chicago, makes it much more difficult for AI models to accurately understand and mimic an artist's personal style. The tool works by learning how AI bots perceive artwork, and then making a set of minimal changes that are invisible to the human eye but confusing to the AI model. The AI bot then has trouble "translating" the art style and generates warped recreations. In the future, Cara also plans to implement Nightshade, another University of Chicago software that helps protect artwork against AI scrapers. Nightshade "poisons" AI training data by adding invisible pixels to artwork that can cause AI software to completely misunderstand the image. Beyond establishing shields against data mining, Cara also uses a third party service to detect and moderate any AI artwork that's posted to the site. Non-human artwork is forbidden, unless it's been properly labeled by the poster.
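The core idea of pixel-level cloaking can be caricatured in a few lines of Python. This is a toy sketch, not Glaze's actual method (Glaze optimizes the perturbation against a surrogate model); the point is only that changes are clamped to a small per-pixel budget so they stay invisible to humans while still shifting what a feature extractor sees.

```python
def cloak(pixels, perturbation, epsilon=2):
    """Apply a perturbation clamped to +/- epsilon per 8-bit channel value."""
    out = []
    for p, d in zip(pixels, perturbation):
        d = max(-epsilon, min(epsilon, d))   # keep the change imperceptible
        out.append(max(0, min(255, p + d)))  # stay in the valid 0-255 range
    return out

original = [120, 121, 119, 200]
cloaked = cloak(original, [5, -7, 1, 0])
print(cloaked)  # [122, 119, 120, 200] -- every value within 2 of the original
```

In the real tools, computing a perturbation that actually confuses a model within such a tiny budget is the hard part; the clamping shown here is just the constraint that keeps the result invisible.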

The Internet

Remote Amazon Tribe Connects To Internet, Gets Addicted To Porn and Social Media 96

The Marubo people, an isolated Indigenous tribe in the Amazon, have gained high-speed internet access through Elon Musk's Starlink service, drastically altering their traditional way of life. While the internet has brought significant benefits like improved communication and emergency response, it has also introduced challenges such as social media addiction, exposure to inappropriate content, and cultural erosion. The New York Times reports: After only nine months with Starlink, the Marubo are already grappling with the same challenges that have racked American households for years: teenagers glued to phones; group chats full of gossip; addictive social networks; online strangers; violent video games; scams; misinformation; and minors watching pornography. Modern society has dealt with these issues over decades as the internet continued its relentless march. The Marubo and other Indigenous tribes, who have resisted modernity for generations, are now confronting the internet's potential and peril all at once, while debating what it will mean for their identity and culture.

The internet was an immediate sensation. "It changed the routine so much that it was detrimental," [admitted one Marubo leader, Enoque Marubo]. "In the village, if you don't hunt, fish and plant, you don't eat." Leaders realized they needed limits. The internet would be switched on for only two hours in the morning, five hours in the evening, and all day Sunday. During those windows, many Marubo are crouched over or reclined in hammocks on their phones. They spend lots of time on WhatsApp. There, leaders coordinate between villages and alert the authorities to health issues and environmental destruction. Marubo teachers share lessons with students in different villages. And everyone is in much closer contact with faraway family and friends. To Enoque, the biggest benefit has been in emergencies. A venomous snake bite can require swift rescue by helicopter. Before the internet, the Marubo used amateur radio, relaying a message between several villages to reach the authorities. The internet made such calls instantaneous. "It's already saved lives," he said.

In April, seven months after Starlink's arrival, more than 200 Marubo gathered in a village for meetings. Enoque brought a projector to show a video about bringing Starlink to the villages. As proceedings began, some leaders in the back of the audience spoke up. The internet should be turned off for the meetings, they said. "I don't want people posting in the groups, taking my words out of context," another said. During the meetings, teenagers swiped through Kwai, a Chinese-owned social network. Young boys watched videos of the Brazilian soccer star Neymar Jr. And two 15-year-old girls said they chatted with strangers on Instagram. One said she now dreamed of traveling the world, while the other said she wants to be a dentist in Sao Paulo. This new window to the outside world had left many in the tribe feeling torn. "Some young people maintain our traditions," said TamaSay Marubo, 42, the tribe's first woman leader. "Others just want to spend the whole afternoon on their phones."
Social Networks

Israel Reportedly Uses Fake Social Media Accounts To Influence US Lawmakers On Gaza War (nytimes.com) 146

An anonymous reader quotes a report from the New York Times: Israel organized and paid for an influence campaign last year targeting U.S. lawmakers and the American public with pro-Israel messaging, as it aimed to foster support for its actions in the war with Gaza, according to officials involved in the effort and documents related to the operation. The covert campaign was commissioned by Israel's Ministry of Diaspora Affairs, a government body that connects Jews around the world with the State of Israel, four Israeli officials said. The ministry allocated about $2 million to the operation and hired Stoic, a political marketing firm in Tel Aviv, to carry it out, according to the officials and the documents. The campaign began in October and remains active on the platform X. At its peak, it used hundreds of fake accounts that posed as real Americans on X, Facebook and Instagram to post pro-Israel comments. The accounts focused on U.S. lawmakers, particularly ones who are Black and Democrats, such as Representative Hakeem Jeffries, the House minority leader from New York, and Senator Raphael Warnock of Georgia, with posts urging them to continue funding Israel's military.

ChatGPT, the artificial intelligence-powered chatbot, was used to generate many of the posts. The campaign also created three fake English-language news sites featuring pro-Israel articles. The Israeli government's connection to the influence operation, which The New York Times verified with four current and former members of the Ministry of Diaspora Affairs and documents about the campaign, has not previously been reported. FakeReporter, an Israeli misinformation watchdog, identified the effort in March. Last week, Meta, which owns Facebook and Instagram, and OpenAI, which makes ChatGPT, said they had also found and disrupted the operation. The secretive campaign signals the lengths Israel was willing to go to sway American opinion on the war in Gaza.

Facebook

Meta Withheld Information on Instagram, WhatsApp Deals, FTC Says (yahoo.com) 9

Meta Platforms withheld information from federal regulators during their original reviews of the Instagram and WhatsApp acquisitions, the US Federal Trade Commission said in a court filing as part of a lawsuit seeking to break up the social networking giant. From a report: In its filing Tuesday, the FTC said the case involves "information Meta had in its files and did not provide" during the original reviews. "At Meta's request the FTC undertook only a limited review" of the deals, the agency said. "The FTC now has available vastly more evidence, including pre-acquisition documents Meta did not provide in 2012 and 2014."

Meta said that it met all of its legal obligations during the Instagram and WhatsApp merger reviews. The FTC has failed to provide evidence to support its claims, a spokesperson said. "The evidence instead shows that Meta faces fierce competition and that Meta's significant investment of time and resources in Instagram and WhatsApp has benefited consumers by making the apps into the services millions of users enjoy today for free," spokesperson Chris Sgro said in a statement. "The FTC has done nothing to build its case over the past four years, while Meta has invested billions to build quality products."

The Internet

Internet Addiction Alters Brain Chemistry In Young People, Study Finds (theguardian.com) 59

An anonymous reader quotes a report from The Guardian: Young people with internet addiction experience changes in their brain chemistry which could lead to more addictive behaviors, research suggests. The study, published in PLOS Mental Health, reviewed previous research using functional magnetic resonance imaging (fMRI) to examine how regions of the brain interact in people with internet addiction.

They found that the effects were evident throughout multiple neural networks in the brains of young people, and that there was increased activity in parts of the brain when participants were resting. At the same time, there was an overall decrease in the functional connectivity in parts of the brain involved in active thinking, which is the executive control network of the brain responsible for memory and decision-making. The research found that these changes resulted in addictive behaviors and tendencies in adolescents, as well as behavioral changes linked to mental health, development, intellectual ability and physical coordination.
"Adolescence is a crucial developmental stage during which people go through significant changes in their biology, cognition and personalities," said Max Chang, the study's lead author and an MSc student at the UCL Great Ormond Street Institute of Child Health (GOS ICH). "As a result, the brain is particularly vulnerable to internet addiction-related urges during this time, such as compulsive internet usage, cravings towards usage of the mouse or keyboard and consuming media. The findings from our study show that this can lead to potentially negative behavioral and developmental changes that could impact the lives of adolescents. For example, they may struggle to maintain relationships and social activities, lie about online activity and experience irregular eating and disrupted sleep."

Chang said he hopes the findings allow early signs of internet addiction to be treated effectively. "Clinicians could potentially prescribe treatment to aim at certain brain regions or suggest psychotherapy or family therapy targeting key symptoms of internet addiction," said Chang. "Importantly, parental education on internet addiction is another possible avenue of prevention from a public health standpoint. Parents who are aware of the early signs and onset of internet addiction will more effectively handle screen time, impulsivity, and minimize the risk factors surrounding internet addiction."
AI

ChatGPT, Claude and Perplexity All Went Down At the Same Time (techcrunch.com) 29

Sarah Perez reports via TechCrunch: After a multi-hour outage that took place in the early hours of the morning, OpenAI's ChatGPT chatbot went down again -- but this time, it wasn't the only AI provider affected. On Tuesday morning, both Anthropic's Claude and Perplexity began seeing issues, too, but these were more quickly resolved. Google's Gemini appears to be operating at present, though it may have also briefly gone offline, according to some user reports.

It's unusual for three major AI providers to all be down at the same time, which could signal a broader infrastructure issue or internet-scale problem, such as those that affect multiple social media sites simultaneously, for example. It's also possible that Claude and Perplexity's issues were not due to bugs or other issues, but from receiving too much traffic in a short period of time due to ChatGPT's outage.

China

The Chinese Internet Is Shrinking (nytimes.com) 88

An anonymous reader shares a report: Chinese people know their country's internet is different. There is no Google, YouTube, Facebook or Twitter. They use euphemisms online to communicate the things they are not supposed to mention. When their posts and accounts are censored, they accept it with resignation. They live in a parallel online universe. They know it and even joke about it. Now they are discovering that, beneath a facade bustling with short videos, livestreaming and e-commerce, their internet -- and collective online memory -- is disappearing in chunks.

A post on WeChat on May 22 that was widely shared reported that nearly all information posted on Chinese news portals, blogs, forums, and social media sites between 1995 and 2005 was no longer available. "The Chinese internet is collapsing at an accelerating pace," the headline said. Predictably, the post itself was soon censored. It's impossible to determine exactly how much and what content has disappeared. [...] In addition to disappearing content, there's a broader problem: China's internet is shrinking. There were 3.9 million websites in China in 2023, down about a quarter from 5.3 million in 2017, according to the country's internet regulator.

Security

Crooks Threaten To Leak 3 Billion Personal Records 'Stolen From Background Firm' (theregister.com) 67

An anonymous reader quotes a report from The Register: Billions of records detailing people's personal information may soon be dumped online after being allegedly obtained from a Florida firm that handles background checks and other requests for folks' private info. A criminal gang that goes by the handle USDoD put the database up for sale for $3.5 million on an underworld forum in April, and rather incredibly claimed the trove included 2.9 billion records on all US, Canadian, and British citizens. It's believed one or more miscreants using the handle SXUL were responsible for the alleged exfiltration and passed the data on to USDoD, which is acting as a broker. The pilfered information is said to include individuals' full names, addresses, and address history going back at least three decades, social security numbers, and people's parents, siblings, and relatives, some of whom have been dead for nearly 20 years. According to USDoD, this info was not scraped from public sources, though there may be duplicate entries for people in the database.

Fast forward to this month, and the infosec watchers at VX-Underground say they've not only been able to view the database and verify that at least some of its contents are real and accurate, but that USDoD plans to leak the trove. Judging by VX-Underground's assessment, the 277.1GB file contains nearly three billion records on people who've at least lived in the United States -- so US citizens as well as, say, Canadians and Brits. This info was allegedly stolen or otherwise obtained from National Public Data, a small information broker based in Coral Springs that offers API lookups to other companies for things like background checks. There is a small silver lining, according to the VX team: "The database DOES NOT contain information from individuals who use data opt-out services. Every person who used some sort of data opt-out service was not present." So, we guess this is a good lesson in opting out.

Social Networks

New York Set to Restrict Social-Media Algorithms for Teens (cnbc.com) 63

Lawmakers in New York have reached a tentative agreement to "prohibit social-media companies from using algorithms to steer content to children without parental consent," according to the Wall Street Journal. "The legislation is aimed at preventing social-media companies from serving automated feeds to minors. The bill, which is still being completed but expected to be voted on this week, also would prohibit platforms from sending minors notifications during overnight hours without parental consent."

Meanwhile, the results of New York's first mental health report were released today, finding that depression and anxiety are rampant among NYC's teenagers, "with nearly half of them experiencing symptoms from one or both in recent years," reports NBC New York. "In a survey conducted last year, 48% of teenagers reported feeling depressive symptoms ranging from mild to severe. The vast majority, however, reported feeling high levels of resilience. Frequent coping mechanisms include listening to music and using social media."
Microsoft

Is the New 'Recall' Feature in Windows a Security and Privacy Nightmare? (thecyberexpress.com) 140

Slashdot reader storagedude shares a provocative post from the cybersecurity news blog of Cyble Inc. (a Ycombinator-backed company promising "AI-powered actionable threat intelligence").

The post delves into concerns that the new "Recall" feature planned for Windows (on upcoming Copilot+ PCs) is "a security and privacy nightmare." Copilot Recall will be enabled by default and will capture frequent screenshots, or "snapshots," of a user's activity and store them in a local database tied to the user account. The potential for exposure of personal and sensitive data through the new feature has alarmed security and privacy advocates and even sparked a UK inquiry into the issue. In a long Mastodon thread on the new feature, Windows security researcher Kevin Beaumont wrote, "I'm not being hyperbolic when I say this is the dumbest cybersecurity move in a decade. Good luck to my parents safely using their PC."

In a blog post on Recall security and privacy, Microsoft said that processing and storage are done only on the local device and encrypted, but even Microsoft's own explanations raise concerns: "Note that Recall does not perform content moderation. It will not hide information such as passwords or financial account numbers. That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry." Security and privacy advocates take issue with assertions that the data is stored securely on the local device. If someone has a user's password or if a court orders that data be turned over for legal or law enforcement purposes, the amount of data exposed could be much greater with Recall than would otherwise be exposed... And hackers, malware and infostealers will have access to vastly more data than they would without Recall.

Beaumont said the screenshots are stored in a SQLite database, "and you can access it as the user including programmatically. It 100% does not need physical access and can be stolen.... Recall enables threat actors to automate scraping everything you've ever looked at within seconds."
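Beaumont's point that a local SQLite file needs no special access to read can be illustrated with a short Python sketch. The table layout below is hypothetical, invented for illustration, not Recall's actual schema; what matters is that any code running as the user can open such a database and search it.

```python
import sqlite3

# Hypothetical schema for illustration only -- the real Recall database
# layout is not documented here. The point: a local SQLite file is
# readable with ordinary user-level code, no physical access required.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snapshots (ts TEXT, app TEXT, ocr_text TEXT)")
db.execute("INSERT INTO snapshots VALUES "
           "('2024-06-04T09:00', 'Browser', 'account number 1234')")

# An infostealer-style query: find every snapshot mentioning 'account'.
rows = db.execute(
    "SELECT ts, app FROM snapshots WHERE ocr_text LIKE ?", ("%account%",)
).fetchall()
print(rows)  # [('2024-06-04T09:00', 'Browser')]
```

This is what "can access it as the user including programmatically" means in practice: the same few lines work for legitimate tooling and for malware alike.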

Beaumont's LinkedIn profile and blog say that starting in 2020 he worked at Microsoft for nearly a year as a senior threat intelligence analyst. And now Beaumont's Mastodon post is also raising other concerns (according to Cyble's blog post):
  • "Sensitive data deleted by users will still be saved in Recall screenshots... 'If you or a friend use disappearing messages in WhatsApp, Signal etc, it is recorded regardless.'"
  • "Beaumont also questioned Microsoft's assertion that all this is done locally."

The blog post also notes that Leslie Carhart, Director of Incident Response at Dragos, had this reaction to Beaumont's post: "The outrage and disbelief are warranted."


Government

Did the US Government Ignore a Chance to Make TikTok Safer? (yahoo.com) 59

"To save itself, TikTok in 2022 offered the U.S. government an extraordinary deal," reports the Washington Post. The video app, owned by a Chinese company, said it would let federal officials pick its U.S. operation's board of directors, would give the government veto power over each new hire and would pay an American company that contracts with the Defense Department to monitor its source code, according to a copy of the company's proposal. It even offered to give federal officials a kill switch that would shut the app down in the United States if they felt it remained a threat.

The Biden administration, however, went its own way. Officials declined the proposal, forfeiting potential influence over one of the world's most popular apps in favor of a blunter option: a forced-sale law signed last month by President Biden that could lead to TikTok's nationwide ban. The government has never publicly explained why it rejected TikTok's proposal, opting instead for a potentially protracted constitutional battle that many expect to end up before the Supreme Court... But the extent to which the United States evaluated or disregarded TikTok's proposal, known as Project Texas, is likely to be a core point of dispute in court, where TikTok and its owner, ByteDance, are challenging the sale-or-ban law as an "unconstitutional assertion of power."

The episode raises questions over whether the government, when presented with a way to address its concerns, chose instead to back an effort that would see the company sold to an American buyer, even though some of the issues officials have warned about — the opaque influence of its recommendation algorithm, the privacy of user data — probably would still be unresolved under new ownership...

A senior Biden administration official said in a statement that the administration "determined more than a year ago that the solution proposed by the parties at the time would be insufficient to address the serious national security risks presented. While we have consistently engaged with the company about our concerns and potential solutions, it became clear that divestment from its foreign ownership was and remains necessary."

"Since federal officials announced an investigation into TikTok in 2019, the app's user base has doubled to more than 170 million U.S. accounts," according to the article.

It also includes this assessment from Anupam Chander, a Georgetown University law professor who researches international tech policy. "The government had a complete absence of faith in [its] ability to regulate technology platforms, because there might be some vulnerability that might exist somewhere down the line."
AI

Could AI Replace CEOs? (msn.com) 132

"As AI programs shake up the office, potentially making millions of jobs obsolete, one group of perpetually stressed workers seems especially vulnerable..." writes the New York Times.

"The chief executive is increasingly imperiled by A.I." These employees analyze new markets and discern trends, both tasks a computer could do more efficiently. They spend much of their time communicating with colleagues, a laborious activity that is being automated with voice and image generators. Sometimes they must make difficult decisions — and who is better at being dispassionate than a machine?

Finally, these jobs are very well paid, which means the cost savings of eliminating them is considerable...

This is not just a prediction. A few successful companies have begun to publicly experiment with the notion of an A.I. leader, even if at the moment it might largely be a branding exercise... [The article gives the example of the Chinese online game company NetDragon Websoft, which has 5,000 employees, and the upscale Polish rum company Dictador.]

Chief executives themselves seem enthusiastic about the prospect — or maybe just fatalistic. EdX, the online learning platform created by administrators at Harvard and M.I.T. that is now a part of publicly traded 2U Inc., surveyed hundreds of chief executives and other executives last summer about the issue. Respondents were invited to take part and given what edX called "a small monetary incentive" to do so. The response was striking. Nearly half — 47 percent — of the executives surveyed said they believed "most" or "all" of the chief executive role should be completely automated or replaced by A.I. Even executives believe executives are superfluous in the late digital age...

The pandemic prepared people for this. Many office workers worked from home in 2020, and quite a few still do, at least several days a week. Communication with colleagues and executives is done through machines. It's just a small step to communicating with a machine that doesn't have a person at the other end of it. "Some people like the social aspects of having a human boss," said Phoebe V. Moore, professor of management and the futures of work at the University of Essex Business School. "But after Covid, many are also fine with not having one."

The article also notes that a 2017 survey of 1,000 British workers found 42% saying they'd be "comfortable" taking orders from a computer.
Advertising

How Misinformation Spreads? It's Funded By 'The Hellhole of Programmatic Advertising' (wired.com) 66

Journalist Steven Brill has written a new book called The Death of Truth. Its subtitle? "How Social Media and the Internet Gave Snake Oil Salesmen and Demagogues the Weapons They Needed to Destroy Trust and Polarize the World-And What We Can Do."

An excerpt published by Wired points out that last year around the world, $300 billion was spent on "programmatic advertising", and $130 billion was spent in the United States alone in 2022. The problem? For over a decade there's been "brand safety" technology, the article points out — but "what artificial intelligence could not do was spot most forms of disinformation and misinformation..."

The end result... In 2019, other than the government of Vladimir Putin, Warren Buffett was the biggest funder of Sputnik News, the Russian disinformation website controlled by the Kremlin... Geico, the giant American insurance company and subsidiary of Buffett's Berkshire Hathaway, was the leading advertiser on the American version of Sputnik News' global website network... No one at Geico or its advertising agency had any idea its ads would appear on Sputnik, let alone what anti-American content would be displayed alongside the ads. How could they? Which person or army of people at Geico or its agency could have read 44,000 websites?

Geico's ads had been placed through a programmatic advertising system that was invented in the late 1990s as the internet developed. It exploded beginning in the mid-2000s and is now the overwhelmingly dominant advertising medium. Programmatic algorithms, not people, decide where to place most of the ads we now see on websites, social media platforms, mobile devices, and streaming television, and, increasingly, hear on podcasts... If Geico's advertising campaign were typical of programmatic campaigns for broad-based consumer products and services, each of its ads would have been placed on an average of 44,000 websites, according to a study done for the leading trade association of big-brand advertisers.

Geico is hardly the only rock-solid American brand to be funding the Russians. During the same period that the insurance company's ads appeared on Sputnik News, 196 other programmatic advertisers bought ads on the website, including Best Buy, E-Trade, and Progressive insurance. Sputnik News' sister propaganda outlet, RT.com (it was once called Russia Today until someone in Moscow decided to camouflage its parentage), raked in ad revenue from Walmart, Amazon, PayPal, and Kroger, among others... Almost all advertising online — and even much of it on television (through streaming TV), or on podcasts, radio, mobile devices, and electronic billboards — is now done programmatically, which means the machine, not a planner, makes those placement decisions. Unless the advertiser uses special tools, such as what are called exclusion or inclusion lists, the publishers and content around which the ad appears, and which the ad is financing, are no longer part of the decision.

"What I kept hearing as the professionals explained it to me was that the process is like a stock exchange, except that the buyer doesn't know what stock he is buying... the advertiser and its ad agency have no idea where among thousands of websites its ad will appear."
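Mechanically, the exclusion lists the excerpt mentions are just a domain filter applied before a bid is placed. A minimal sketch in Python — the domains, function names, and matching logic here are invented for illustration, not any real ad platform's API:

```python
# Minimal sketch of an advertiser-side exclusion list: before bidding on an
# ad slot, check the publisher's domain against a block list. Entries and
# names are hypothetical, not a real ad platform's configuration.

def should_bid(publisher_domain: str, exclusion_list: set[str]) -> bool:
    """Return True only if the publisher is not on the advertiser's block list."""
    return publisher_domain.lower().strip() not in exclusion_list

# An advertiser-maintained exclusion list (illustrative entries).
blocked_domains = {"sputniknews.com", "rt.com"}

print(should_bid("example-news.com", blocked_domains))  # True: not blocked
print(should_bid("RT.com", blocked_domains))            # False: matches block list
```

An inclusion list is simply the inverse check: bid only when the publisher appears on an approved list, which is stricter but requires the advertiser to curate the list up front.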
AI

Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic (arstechnica.com) 100

Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."

The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an expose about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit."

Journalists also reacted to news of the deals through the publications themselves. On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI.

Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled "This article is OpenAI training data," in which he expressed apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and Nick Bostrom's classic "paperclip maximizer" thought experiment, and cautioning that a single-minded focus on market share and profits could ultimately destroy the very ecosystem AI companies rely on for training data. He worried that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.

Social Networks

TikTok Preparing a US Copy of the App's Core Algorithm (reuters.com) 57

An anonymous reader quotes a report from Reuters: TikTok is working on a clone of its recommendation algorithm for its 170 million U.S. users, which may result in a version that operates independently of its Chinese parent and could be more palatable to American lawmakers who want to ban it, according to sources with direct knowledge of the efforts. The work on splitting the source code, ordered by TikTok's Chinese parent ByteDance late last year, predated a bill to force a sale of TikTok's U.S. operations that began gaining steam in Congress this year. The bill was signed into law in April. The sources, who were granted anonymity because they are not authorized to speak publicly about the short-form video-sharing app, said that once the code is split, it could lay the groundwork for a divestiture of the U.S. assets, although there are no current plans to do so. The company has previously said it has no plans to sell the U.S. assets and that such a move would be impossible. [...]

In the past few months, hundreds of ByteDance and TikTok engineers in both the U.S. and China were ordered to begin separating millions of lines of code, sifting through the company's algorithm that pairs users with videos to their liking. The engineers' mission is to create a separate code base that is independent of systems used by ByteDance's Chinese version of TikTok, Douyin, while eliminating any information linking to Chinese users, two sources with direct knowledge of the project told Reuters. [...] The complexity of the task that the sources described to Reuters as tedious "dirty work" underscores the difficulty of splitting the underlying code that binds TikTok's U.S. operations to its Chinese parent. The work is expected to take over a year to complete, these sources said. [...] At one point, TikTok executives considered open sourcing some of TikTok's algorithm, or making it available to others to access and modify, to demonstrate technological transparency, the sources said.

Executives have communicated plans and provided updates on the code-splitting project during a team all-hands, in internal planning documents and on its internal communications system, called Lark, according to one of the sources who attended the meeting and another source who has viewed the messages. Compliance and legal issues involved with determining what parts of the code can be carried over to TikTok are complicating the work, according to one source. Each line of code has to be reviewed to determine if it can go into the separate code base, the sources added. The goal is to create a new source code repository for a recommendation algorithm serving only TikTok U.S. Once completed, TikTok U.S. will run and maintain its recommendation algorithm independent of TikTok apps in other regions and its Chinese version Douyin. That move would cut it off from the massive engineering development power of its parent company in Beijing, the sources said. Even if TikTok completes the work of splitting the recommendation engine from its Chinese counterpart, management is aware of the risk that TikTok U.S. may not be able to deliver the same level of performance as the existing TikTok, because it is heavily reliant on ByteDance's engineers in China to update and maintain the code base to maximize user engagement, sources added.
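The line-by-line review Reuters describes amounts to partitioning a code base by dependency: anything that references a shared ByteDance or Douyin system cannot be carried over as-is. A toy sketch of that triage — the marker strings and function names are invented for illustration, and the real criteria are legal-, compliance-, and architecture-driven, far beyond string matching:

```python
# Toy sketch of triaging source lines for a standalone code base.
# SHARED_MARKERS is a hypothetical stand-in for "references a shared
# internal system"; real review involves compliance and legal checks.

SHARED_MARKERS = ("douyin", "bytedance_internal")

def can_carry_over(line: str) -> bool:
    """A line is eligible for the separate repository if it names no shared system."""
    lowered = line.lower()
    return not any(marker in lowered for marker in SHARED_MARKERS)

def split_code(lines: list[str]) -> tuple[list[str], list[str]]:
    """Partition lines into (carry_over, needs_rework)."""
    carry, rework = [], []
    for line in lines:
        (carry if can_carry_over(line) else rework).append(line)
    return carry, rework

sample = [
    "import recommendation.ranker",
    "from bytedance_internal import feature_store",
    "score = ranker.rank(user, videos)",
]
carry, rework = split_code(sample)
print(len(carry), len(rework))  # 2 1
```

The "needs_rework" pile is what makes the task slow: each flagged dependency has to be reimplemented or replaced before the new repository can function on its own, which is consistent with the sources' estimate of over a year of work.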

AI

OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity' (reuters.com) 16

An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The artificial intelligence firm said that over the last three months, the threat actors used its AI models to generate short comments and longer articles in a range of languages, as well as made-up names and bios for social media accounts. These campaigns, which included threat actors from Russia, China, Iran and Israel, focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others.

The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns did not gain increased audience engagement or reach as a result of using the AI firm's services, OpenAI said in the statement. OpenAI said these operations did not rely solely on AI-generated material but also included manually written texts and memes copied from across the internet.
In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.
United States

New York Governor To Launch Bill Banning Smartphones in Schools (theguardian.com) 113

The New York governor, Kathy Hochul, plans to introduce a bill banning smartphones in schools, the latest in a series of legislative moves aimed at online child safety by New York's top official. From a report: "I have seen these addictive algorithms pull in young people, literally capture them and make them prisoners in a space where they are cut off from human connection, social interaction and normal classroom activity," she said. Hochul said she would launch the bill later this year and take it up in New York's next legislative session, which begins in January 2025. If passed, the law would allow schoolchildren to carry simple phones that cannot access the internet but can still send texts, a capability that has been a sticking point for parents. She did not offer specifics on enforcing the prohibition. "Parents are very anxious about mass shootings in school," she said. "Parents want the ability to have some form of connection in an emergency situation." The smartphone-ban bill will follow two others Hochul is pushing that outline measures to safeguard children's privacy online and limit their access to certain features of social networks.

Slashdot Top Deals