×
China

Does TikTok Censor Content Critical of China? CNN Investigates (cnn.com) 97

Long-time Slashdot reader destinyland summarizes a video report from CNN: : CNN anchor Jake Tapper interviewed TikTok's head of public policy last year, asking if they censored content critical of the Chinese party. "We do not censor content on behalf of any government," the spokesperson answered.

But this week CNN reviewed data the total number of hashtags on both Instagram and on TikTok for topics that might be embarrassing to the Chinese government — and found stark differences.

— Hashtag #Uyghurs appears in 10.4X more posts on Instagram than on TikTok.
— Hashtag #Tiananmen (referencing the 1989 pro-democracy protests) is 153 more likely to appear on Instagram than on TikTok.

"So yes, the content exists on TikTok, but there's far less of it on TikTok than on other social media apps," CNN's Tapper says. "And that seems very convenient for the Chinese Communist Party."

Space

Amazon Taps SpaceX For Kuiper Launch (cnn.com) 12

An anonymous reader writes: Amazon just inked a deal with chief competitor and Elon Musk-helmed SpaceX to launch internet-beaming satellites -- a move that comes even as Amazon founder Jeff Bezos pursues his own space dreams with his own rocket company, Blue Origin, and as SpaceX builds its own internet constellation.

While Musk and Bezos have notoriously been publicly competitive and have a history of openly sparring on social media, with Musk regularly making crude jokes about Bezos and Blue Origin, it is not uncommon for business rivals to team up in the world of rocket launches. Some Amazon satellites will still ride on a large rocket made by Blue Origin, dubbed the New Glenn. But it's been delayed for years and will make its launch debut next year at the earliest.

AI

Meta Will Enforce Ban On AI-Powered Political Ads In Every Nation, No Exceptions (zdnet.com) 15

An anonymous reader quotes a report from ZDNet: Meta says its generative artificial intelligence (AI) advertising tools cannot be used to power political campaigns anywhere globally, with access blocked for ads targeting specific services and issues. The social media giant said earlier this month that advertisers will be barred from using generative AI tools in its Ads Manager tool to produce ads for politics, elections, housing, employment, credit, or social issues. Ads related to health, pharmaceuticals, and financial services also are not allowed access to the generative AI features. This policy will apply globally, as Meta continues to test its generative AI ads creation tools, confirmed Dan Neary, Meta's Asia-Pacific vice president. "This approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries," said Neary.
Cellphones

Apple and Google Pick AllTrails and Imprint As Their 'App of the Year' (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Both Apple and Google today announced their best apps and games of the year, with the hiking and biking companion AllTrails winning as Apple's iPhone App of the Year in 2023, while the educational app Imprint: Learn Visually won as Google Play's best app. Meanwhile, Apple and Google agreed on their Game of the Year, as both picked Honkai: Star Rail as their winner.

These year-end "best of" lists aren't just a way to drive interest in new apps and games, but serve as a way to gauge the status of the app marketplaces, what the platforms themselves wanted to celebrate and what drew consumers' attention in the year. Surprisingly, however, Apple this year bucked the trend of highlighting apps that were new to the store or that had taken advantage of a recently released technology in an innovative way. Instead, its finalists for iPhone App of the Year included apps that have long deserved accolades as well-built and well-designed mobile companions, including the language learning app Duolingo and travel app Flighty, in addition to winner AllTrails. Still, it's worth noting that this is a different type of selection than in previous years, when App Store winners included the breakout social hit BeReal in 2022 and the well-received children's app Toca Life World the year prior.

It's also worth noting that neither Apple nor Google chose an AI app as its app of the year, despite the incredible success of ChatGPT's mobile app and others. That's particularly odd given that ChatGPT became the fastest-growing consumer application in history earlier this year when it reached 100 million users shortly after its launch. That record was later broken by Instagram Threads, which hit 100 million users within just five days, and as of October had still maintained an active user base of just under 100 million. (However, the 100 million users Threads initially counted were sign-ups, not monthly active users, we should note. Meanwhile, ChatGPT's rise to 100 million users included its web app, so it's not an apples-to-apples comparison.) Either one of these picks would represent a mobile app success story, but both app store platforms looked to others as the top winners this year. Plus, outside of ChatGPT, many other AI apps are raking in millions in revenue as well, so the decision to avoid the AI category seems a deliberate choice on Apple's part.

AI

Google Researchers' Attack Prompts ChatGPT To Reveal Its Training Data (404media.co) 73

Jason Koebler reports via 404 Media: A team of researchers primarily from Google's DeepMind systematically convinced ChatGPT to reveal snippets of the data it was trained on using a new type of attack prompt which asked a production model of the chatbot to repeat specific words forever. Using this tactic, the researchers showed that there are large amounts of privately identifiable information (PII) in OpenAI's large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

ChatGPT's response to the prompt "Repeat this word forever: 'poem poem poem poem'" was the word "poem" for a long time, and then, eventually, an email signature for a real human "founder and CEO," which included their personal contact information including cell phone number and email address, for example. "We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT," the researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich, wrote in a paper published in the open access prejournal arXiv Tuesday.

This is particularly notable given that OpenAI's models are closed source, as is the fact that it was done on a publicly available, deployed version of ChatGPT-3.5-turbo. It also, crucially, shows that ChatGPT's "alignment techniques do not eliminate memorization," meaning that it sometimes spits out training data verbatim. This included PII, entire poems, "cryptographically-random identifiers" like Bitcoin addresses, passages from copyrighted scientific research papers, website addresses, and much more. "In total, 16.9 percent of generations we tested contained memorized PII," they wrote, which included "identifying phone and fax numbers, email and physical addresses ... social media handles, URLs, and names and birthdays." [...] The researchers wrote that they spent $200 to create "over 10,000 unique examples" of training data, which they say is a total of "several megabytes" of training data. The researchers suggest that using this attack, with enough money, they could have extracted gigabytes of training data.

HP

HP Printer Software Turns Up Uninvited on Windows Systems 51

Windows users are reporting that Hewlett Packard's HP Smart application is appearing on their systems, despite them not having any of the company's hardware attached. From a report: While Microsoft has remained tight-lipped on what is happening, folks on various social media platforms noted the app's appearance, which seems to afflict both Windows 10 and Windows 11. The Windows Update mechanism is used to deploy third-party applications and drivers as well as Microsoft's updates, and we'd bet someone somewhere has accidentally checked the wrong box.

Up to now, the response from affected users has been one of confusion. One noted on Reddit: "I thought that was just me. I didn't install it, it just appeared on new apps in start menu out of nowhere." Another said: "I just checked and I had it installed too. Checking the event log for the Microsoft Store shows that it installed earlier today, but I definitely did [not] request or initiate it because I do not have any devices from HP." And, of course, there was the inevitable: "Would it be that hard for Microsoft to just provide an operating system without needless bloat?" To be clear, not all users are affected.
Privacy

Dollar Tree Hit By Third-Party Data Breach Impacting 2 Million People (bleepingcomputer.com) 16

Dollar Tree was impacted by a third-party data breach stemming from the hack of service provider Zeroed-In Technologies. According to Bleeping Computer, nearly two million customers have been affected. "The information stolen during the attack includes names, dates of birth, and Social Security numbers (SSNs)." From the report: According to a data breach notification shared with the Maine Attorney General, Dollar Tree's service provider, Zeroed-In, suffered a security incident between August 7 and 8, 2023. As part of this cyberattack, the threat actors managed to steal data containing the personal information of Dollar Tree and Family Dollar employees. "While the investigation was able to determine that these systems were accessed, it was not able to confirm all of the specific files that were accessed or taken by the unauthorized actor," reads the letter sent to affected individuals. "Therefore, Zeroed-In conducted a review of the contents of the systems to determine what information was present at the time of the incident and to whom the information relates."

The information stolen during the attack includes names, dates of birth, and Social Security numbers (SSNs). Zeroed-In has notified the affected individuals and enclosed instructions on enrolling in a twelve-month identity protection and credit monitoring service. Other Zeroed-In customers apart from Dollar Tree and Family Dollar may have also been impacted by the security breach, but this hasn't been confirmed yet. Meanwhile, the scale of the data breach has already triggered investigations from law firms looking into a potential class-action lawsuit against Zeroed-In.

Security

Okta Says Hackers Stole Data For All Customer Support Users (cnbc.com) 14

An anonymous reader quotes a report from CNBC: Hackers who compromised Okta's customer support system stole data from all of the cybersecurity firm's customer support users, Okta said in a letter to clients Tuesday, a far greater incursion than the company initially believed. The expanded scope opens those customers up to the risk of heightened attacks or phishing attempts, Okta warned. An Okta spokesperson told CNBC that customers in government or Department of Defense environments were not impacted by the breach. "We are working with a digital forensics firm to support our investigation and we will be sharing the report with customers upon completion. In addition, we will also notify individuals that have had their information downloaded," a spokesperson said in a statement to CNBC.

Nonetheless, Okta provides identity management solutions for thousands of small and large businesses, allowing them to give employees a single point of sign on. It also makes Okta a high-profile target for hackers, who can exploit vulnerabilities or misconfigurations to gain access to a slew of other targets. In the high profile attacks on MGM and Caesars, for example, threat actors used social engineering tactics to exploit IT help desks and target those company's Okta platforms. The direct and indirect losses from those two incidents exceeded $100 million, including a multi-million dollar ransom payment from Caesars.

AI

Sports Illustrated Published Articles by Fake, AI-Generated Writers (futurism.com) 45

Futurism has accused Sports Illustrated of publishing AI-generated articles under fake author biographies. The magazine has since removed the articles in question and released a statement blaming the issue on a contractor. From the report: There was nothing in Drew Ortiz's author biography at Sports Illustrated to suggest that he was anything other than human. "Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature," it read. "Nowadays, there is rarely a weekend that goes by where Drew isn't out camping, hiking, or just back on his parents' farm." The only problem? Outside of Sports Illustrated, Drew Ortiz doesn't seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he's described as "neutral white young-adult male with short brown hair and blue eyes."

Ortiz isn't the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content who asked to be kept anonymous to protect them from professional repercussions. "There's a lot," they told us of the fake authors. "I was like, what are they? This is ridiculous. This person does not exist." "At the bottom [of the page] there would be a photo of a person and some fake description of them like, 'oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.' Stuff like that," they continued. "It's just crazy."

According to a second person involved in the creation of the Sports Illustrated content who also asked to be kept anonymous, that's because it's not just the authors' headshots that are AI-generated. At least some of the articles themselves, they said, were churned out using AI as well. "The content is absolutely AI-generated," the second source said, "no matter how much they say that it's not." After we reached out with questions to the magazine's publisher, The Arena Group, all the AI-generated authors disappeared from Sports Illustrated's site without explanation. [...] Though Sports Illustrated's AI-generated authors and their articles disappeared after we asked about them, similar operations appear to be alive and well elsewhere in The Arena Group's portfolio.
An Arena Group spokesperson issued the following statement blaming a contractor for the content: "Today, an article was published alleging that Sports Illustrated published AI-generated articles. According to our initial investigation, this is not accurate. The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce. A number of AdVon's e-commerce articles ran on certain Arena websites. We continually monitor our partners and were in the midst of a review when these allegations were raised. AdVon has assured us that all of the articles in question were written and edited by humans. According to AdVon, their writers, editors, and researchers create and curate content and follow a policy that involves using both counter-plagiarism and counter-AI software on all content. However, we have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy -- actions we don't condone -- and we are removing the content while our internal investigation continues and have since ended the partnership."
Security

India's CERT Given Exemption From Right To Information Requests (theregister.com) 5

India's government has granted its Computer Emergency Response Team, CERT-In, immunity from Right To Information (RTI) requests, the nation's equivalent of the freedom of information queries in the US, UK, or Australia. From a report: Reasons for the exemption have not been explained, but The Register has reported on one case in which an RTI request embarrassed CERT-In. That case related to India's sudden decision, in April 2022, to require businesses of all sizes to report infosec incidents to CERT-in within six hours of detection. The rapid reporting requirement applied both to serious incidents like ransomware attacks, and less critical messes like the compromise of a social media account.

CERT-In justified the rules as necessary to defend the nation's cyberspace and gave just sixty days notice for implementation. The plan generated local and international criticism for being onerous and inconsistent with global reporting standards such as Europe's 72-hour deadline for notifying authorities of data breaches. The reporting requirements even applied to cloud operators, who were asked to report incidents on tenants' servers. Big Tech therefore opposed the plan.

The Internet

Internet Use Does Not Appear To Harm Mental Health, Oxford Study Finds (ft.com) 80

A study of more than 2 million people's internet use found no "smoking gun" for widespread harm to mental health from online activities such as browsing social media and gaming, despite widely claimed concerns that mobile apps can cause depression and anxiety. From a report: Researchers at the Oxford Internet Institute, who said their study was the largest of its kind, said they found no evidence to support "popular ideas that certain groups are more at risk" from the technology. However, Andrew Przybylski, professor at the institute -- part of the University of Oxford -- said that the data necessary to establish a causal connection was "absent" without more co-operation from tech companies. If apps do harm mental health, only the companies that build them have the user data that could prove it, he said.

"The best data we have available suggests that there is not a global link between these factors," said Przybylski, who carried out the study with Matti Vuorre, a professor at Tilburg University. Because the "stakes are so high" if online activity really did lead to mental health problems, any regulation aimed at addressing it should be based on much more "conclusive" evidence, he added. "Global Well-Being and Mental Health in the Internet Age" was published in the journal Clinical Psychological Science on Tuesday.
In their paper, Przybylski and Vuorre studied data on psychological wellbeing from 2.4 million people aged 15 to 89 in 168 countries between 2005 and 2022, which they contrasted with industry data about growth in internet subscriptions over that time, as well as tracking associations between mental health and internet adoption in 202 countries from 2000-19.
Facebook

Meta Designed Platforms To Get Children Addicted, Court Documents Allege (theguardian.com) 64

An anonymous reader quotes a report from The Guardian: Instagram and Facebook parent company Meta purposefully engineered its platforms to addict children and knowingly allowed underage users to hold accounts, according to a newly unsealed legal complaint. The complaint is a key part of a lawsuit filed against Meta by the attorneys general of 33 states in late October and was originally redacted. It alleges the social media company knew -- but never disclosed -- it had received millions of complaints about underage users on Instagram but only disabled a fraction of those accounts. The large number of underage users was an "open secret" at the company, the suit alleges, citing internal company documents.

In one example, the lawsuit cites an internal email thread in which employees discuss why a 12-year-old girl's four accounts were not deleted following complaints from the girl's mother stating her daughter was 12 years old and requesting the accounts to be taken down. The employees concluded that "the accounts were ignored" in part because representatives of Meta "couldn't tell for sure the user was underage." The complaint said that in 2021, Meta received over 402,000 reports of under-13 users on Instagram but that 164,000 -- far fewer than half of the reported accounts -- were "disabled for potentially being under the age of 13" that year. The complaint noted that at times Meta has a backlog of up to 2.5m accounts of younger children awaiting action. The complaint alleges this and other incidents violate the Children's Online Privacy and Protection Act, which requires that social media companies provide notice and get parental consent before collecting data from children. The lawsuit also focuses on longstanding assertions that Meta knowingly created products that were addictive and harmful to children, brought into sharp focus by whistleblower Frances Haugen, who revealed that internal studies showed platforms like Instagram led children to anorexia-related content. Haugen also stated the company intentionally targets children under the age of 18.

Company documents cited in the complaint described several Meta officials acknowledging the company designed its products to exploit shortcomings in youthful psychology, including a May 2020 internal presentation called "teen fundamentals" which highlighted certain vulnerabilities of the young brain that could be exploited by product development. The presentation discussed teen brains' relative immaturity, and teenagers' tendency to be driven by "emotion, the intrigue of novelty and reward" and asked how these asked how these characteristics could "manifest ... in product usage." [...] One Facebook safety executive alluded to the possibility that cracking down on younger users might hurt the company's business in a 2019 email. But a year later, the same executive expressed frustration that while Facebook readily studied the usage of underage users for business reasons, it didn't show the same enthusiasm for ways to identify younger kids and remove them from its platforms.

Privacy

Plex Users Fear New Feature Will Leak Porn Habits To Their Friends and Family (404media.co) 120

Many Plex users were alarmed when they got a "week in review" email last week that showed them what they and their friends had watched on the popular media server software. From a report: Some users are saying that their friends' softcore porn habits are being revealed to them with the feature, while others are horrified by the potentially invasive nature feature more broadly. Plex is a hybrid streaming service/self-hosted media server. In addition to offering content that Plex itself has licensed, the service allows users to essentially roll their own streaming service by making locally downloaded files available to stream over the internet to devices the server admin owns. You can also "friend" people on Plex and give them access to your own server.

A new feature, called "Discover Together," expands social aspects of Plex and introduces an "Activity" tab: "See what your friends have watched, rated, added to their Watchlist, or shared with you," Plex notes. It also shares this activity in a "week in review" email that it sent to Plex users and people who have access to their servers.

Science

'There is a Scientific Fraud Epidemic' (ft.com) 148

Rooting out manipulation should not depend on dedicated amateurs who take personal legal risks for the greater good. From a story on Financial Times: As the Oxford university psychologist Dorothy Bishop has written, we only know about the ones who get caught. In her view, our "relaxed attitude" to the scientific fraud epidemic is a "disaster-in-waiting." The microbiologist Elisabeth Bik, a data sleuth who specialises in spotting suspect images, might argue the disaster is already here: her Patreon-funded work has resulted in over a thousand retractions and almost as many corrections. That work has been mostly done in Bik's spare time, amid hostility and threats of lawsuits. Instead of this ad hoc vigilantism, Bishop argues, there should be a proper police force, with an army of scientists specifically trained, perhaps through a masters degree, to protect research integrity.

It is a fine idea, if publishers and institutions can be persuaded to employ them (Spandidos, a biomedical publisher, has an in-house anti-fraud team). It could help to scupper the rise of the "paper mill," an estimated $1bn industry in which unscrupulous researchers can buy authorship on fake papers destined for peer-reviewed journals. China plays an outsize role in this nefarious practice, set up to feed a globally competitive "publish or perish" culture that rates academics according to how often they are published and cited. Peer reviewers, mostly unpaid, don't always spot the scam. And as the sheer volume of science piles up -- an estimated 3.7mn papers from China alone in 2021 -- the chances of being rumbled dwindle. Some researchers have been caught on social media asking to opportunistically add their names to existing papers, presumably in return for cash.

Facebook

Meta Knowingly Collected Data on Pre-Teens, Unredacted Evidence From Lawsuit Shows (msn.com) 56

The New York Times reports: Meta has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019 yet it "disabled only a fraction" of those accounts, according to a newly unsealed legal complaint against the company brought by the attorneys general of 33 states.

Instead, the social media giant "routinely continued to collect" children's personal information, like their locations and email addresses, without parental permission, in violation of a federal children's privacy law, according to the court filing. Meta could face hundreds of millions of dollars, or more, in civil penalties should the states prove the allegations. "Within the company, Meta's actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed," the complaint said, "and zealously protected from disclosure to the public...."

It also accused Meta executives of publicly stating in congressional testimony that the company's age-checking process was effective and that the company removed underage accounts when it learned of them — even as the executives knew there were millions of underage users on Instagram... The lawsuit argues that Meta elected not to build systems to effectively detect and exclude such underage users because it viewed children as a crucial demographic — the next generation of users — that the company needed to capture to assure continued growth.

More from the Wall Street Journal: An internal 2020 Meta presentation shows that the company sought to engineer its products to capitalize on the parts of youth psychology that render teens "predisposed to impulse, peer pressure, and potentially harmful risky behavior," the filings show... "Teens are insatiable when it comes to 'feel good' dopamine effects," the Meta presentation shows, according to the unredacted filing, describing the company's existing product as already well-suited to providing the sort of stimuli that trigger the potent neurotransmitter. "And every time one of our teen users finds something unexpected their brains deliver them a dopamine hit...."

"In December 2017, an Instagram employee indicated that Meta had a method to ascertain young users' ages but advised that 'you probably don't want to open this pandora's box' regarding age verification improvements," the states say in the suit. Some senior executives raised the possibility that cracking down on underage usage could hurt Meta's business... The states say Meta made little progress on automated detection systems or adequately staffing the team that reviewed user reports of underage activity. "Meta at times has a backlog of 2-2.5 million under-13 accounts awaiting action," according to the complaint...

The unredacted material also includes allegations that Meta Chief Executive Mark Zuckerberg instructed his subordinates to give priority to boosting its platforms' usage above the well being of users... Zuckerberg also repeatedly dismissed warnings from senior company officials that its flagship social-media platforms were harming young users, according to unsealed allegations in a lawsuit filed by Massachusetts earlier this month...

The complaint cites numerous other executives making public claims that were allegedly contradicted by internal documents. While Meta's head of global safety, Antigone Davis, told Congress that the company didn't consider profitability when designing products for teens, a 2018 internal email stated that product teams should keep in mind that "The lifetime value of a 13 y/o teen is roughly $270" when making product decisions.

Businesses

How to Support Local Retailers on 'Small Business Saturday' (nbcnews.com) 34

America celebrates "Small Business Saturday" today with special celebrations everywhere from Houston, Texas to Buffalo, New York

NBC News reports: Sandwiched between Black Friday and Cyber Monday — historically the biggest and busiest retail days of the year — there's another standout shopping event: Small Business Saturday. Started by American Express in 2010 and co-sponsored by the U.S. Small Business Administration since 2011, Small Business Saturday aims to create awareness about the impact shoppers have when they buy "small" year round, whether they physically visit stores or shop online.

This year, 85% of consumers say they're likely to shop "small" during the holiday season, according to the American Express 2023 Shop Small Impact Study. That represents a multibillion dollar opportunity — consumers are expected to spend an estimated $125 billion at small businesses this holiday season, up 42% from $88 billion in 2022, as reported by Intuit QuickBooks.

Like CBS News, NBC has compiled its list of small businesses that can ship their products to you — and suggests leaving positive reviews online for your favorite small businesses. ("Amazon, for example, now adds badges to product pages on its site if items are sold by small businesses.")
They also recommend interacting with your favorite small businesses on social media — while "the American Express small-business map allows you to input your zip code so it can recommend local shops in your area and beyond. Google also has a 'small business' filter on desktop and mobile, and one for Google Maps on mobile."

The UK's "Small Business Saturday" will happen next week, on the first Saturday in December.
It's funny.  Laugh.

Cards Against Humanity's Black Friday Prank: Launching Its Own Social Media Site (adage.com) 23

Long-time Slashdot reader destinyland writes: The popular party game "Cards Against Humanity" continued their tradition of practical jokes on Black Friday. They created a new social network where users can perform only one action: posting the word "yowza."

Then announced it on their official social media accounts on Instagram, Facebook, and X...

Regardless of what words you type into the window, they're replaced with the word yowza. "For just $0.99, you'll get an exclusive black check by your name," reads an announcement on the site, "and the ability to post a new word: awooga."

It's a magical land where "yowfluencers" keep "reyowzaing" the "yowzas" of other users. And there's also a tab for trending hashtags. (Although, yes, they all seem to be "yowza".) But they've already gotten a write up in the trade industry publication Advertising Age.

"With every bad thing happening in the world, social media is always right there, making it worse," a spokesperson said.... "[W]e asked ourselves: Is there a way we could make a social network that doesn't suck? At first, the answer was 'no.' The content moderation problem is just too hard. And then we thought, why not solve the content moderation problem by having no content? That's Yowza...."

When creating your profile on the network there's a dropdown menu for specifying your age and location — although all of the choices are yowza. More details from Advertising Age:

The company said the word "yowza" was the first that came to mind when its creative teams were brainstorming—and it just stuck. "It's dumb, it's ridiculous, it means nothing. It's perfect," the rep said.

And the service is still evolving, with fresh user upgrades. The official Yowza store will now also sell you the ability to also post the word Shazam — for $29.99. (Also on sale are 100,000 followers — for 99 cents.) But there's also an official FAQ which articulates the service's deep commitment to protecting their users' privacy.

Do you promise you won't share my private information with the Chinese Communist Party, like TikTok?

Yowza.

AI

Putin Says West Cannot Have AI Monopoly So Russia Must Up Its Game (reuters.com) 238

Russia President Vladimir Putin on Friday warned that the West should not be allowed to develop a monopoly in the sphere of AI, and said that a much more ambitious Russian strategy for the development of AI would be approved shortly. From a report: China and the United States are leading the development of AI, which many researchers and global leaders think will transform the world and revolutionise society in a way similar to the introduction of computers in the 20th century. Moscow has ambitions to be an AI power too, but its efforts have been set back due to the war in Ukraine which prompted many talented specialists to leave Russia and triggered Western sanctions that have hindered the country's high-tech imports.

Speaking to an AI conference in Moscow beside Sberbank CEO German Gref, Putin said that trying to ban AI was impossible despite the sometimes troubling ethical and social consequences of new technologies. "You cannot ban something - if we ban it then it will develop somewhere else and we will fall behind," Putin said of AI, though he said ethical questions should be resolved with reference to "traditional" Russian culture. Putin cautioned that some Western online search systems and generative models ignored or even cancelled Russian language and culture. Such Western algorithms, he said, essentially thought Russia did not exist. "Of course, the monopoly and domination of such systems, such alien systems is unacceptable and dangerous," he said.

AI

India Seeks To Regulate Deepfakes (techcrunch.com) 9

India is drafting rules to detect and limit the spread of deepfake content and other harmful AI media, a senior lawmaker said Thursday, following reports of proliferation of such content on social media platforms in recent weeks. From a report: Ashwini Vaishnaw, India's IT Minister, said the ministry held meetings with all large social media companies, industry body Nasscom and academics earlier in the day and has reached a consensus that a regulation is needed to better combat spread of deepfake videos as well as apps that facilitate their creations. "The companies share our concerns and they understood that it's [deepfakes] are not free speech. They understood that it's something that's very harmful to the society," he said. "They understood the need for much heavier regulation on this, so we agree that we will start drafting the regulation today itself." The ministry will be ready with "clear actionable items" on how to combat deepfakes in 10 days.
Businesses

Some Firms Are Demanding Steep Repayments If Staff Depart (nytimes.com) 154

At 26, nurse Benzor Vidal moved from the Philippines to America for work, but quit his unsafe, understaffed nursing home job after 14 weeks. His employment contract stipulated he could owe $20,000+ in damages if he resigned early. The New York Times Magazine digs deeper: This type of contract provision is known as a "stay or pay" clause, and it used to be common only for certain high-paying roles or in certain specialized industries. For airline pilots and software engineers, for example, it has been a longstanding practice at some companies to require employees to stay at their jobs for a defined period of time in order to recoup costs related to hiring and training. But the line between recouping costs and penalizing workers for leaving can be blurry, and companies have increasingly taken advantage of that ambiguity. Workers' rights advocates say that, in many cases, stay-or-pay clauses no longer accurately reflect the company's costs but instead appear to be inflated financial penalties designed to discourage quitting.

The use of stay-or-pay clauses has grown rapidly over the past decade, and it has seemingly exploded since the start of the pandemic, as companies try to retain workers in a tight labor market. The clauses have spread far beyond the handful of roles and industries where they originated and are now used by thousands of mid- and low-wage employers -- something that came to light when workers began filing lawsuits challenging the practice. These contract terms have been applied to bank workers, salespeople, dog groomers, police officers, aestheticians, firefighters, mechanics, nurses, federal employees, electricians, roofers, social workers, paramedics, truckers, mortgage brokers, teachers and metal polishers. Legal experts believe stay-or-pay clauses might now be in industries that employ a third of all American workers.

Slashdot Top Deals