AI

Sam Altman To Return as OpenAI CEO (reuters.com) 88

OpenAI said today it reached an agreement for Sam Altman to return as CEO days after his ouster, capping a marathon discussion about the future of the startup at the center of the artificial intelligence boom. From a report: In addition to Altman's return, the company agreed in principle to partly reconstitute the board of directors that had dismissed him. Former Salesforce co-CEO Bret Taylor and former U.S. Treasury Secretary Larry Summers will join Quora CEO and current director Adam D'Angelo, OpenAI said. Under an "agreement in principle," Altman will serve under the supervision of a new board of directors.

"I love OpenAI, and everything I've done over the past few days has been in service of keeping this team and its mission together," Altman wrote on the social media site X in response to the announcement. "When I decided to join Microsoft on Sunday evening, it was clear that was the best path for me and the team." Microsoft chief Satya Nadella hired Altman after he was sacked.

With the "support" of the new OpenAI board and Nadella, Altman said, he looked forward to "returning to OpenAI, and building on our strong partnership with Microsoft." Nadella said he was "encouraged by the changes to the OpenAI board" and believed that the decision was the "first essential step on a path to more stable, well-informed, and effective governance."
Music

Spotify To Phase Out Service In Uruguay Following New Copyright Bill (theguardian.com) 36

Laura Snapes reports via The Guardian: Spotify is to phase out its service in Uruguay after the passing of a new music copyright bill requiring "fair and equitable remuneration" for authors, composers, performers, directors and screenwriters. In October, the country's parliament voted on a budget bill that included two new articles: per article 284, social networks and the internet are to be added "as formats for which, if a song is reproduced, the performer is entitled to financial remuneration" -- namely if a link to a song is shared online. Article 285 will put into copyright law the "right to a fair and equitable remuneration" for all "agreements entered into by authors, composers, performers, directors and screenwriters with respect to their faculty of public communication and making available to the public of phonograms and audiovisual recordings."

In response, Spotify said in a statement on November 20 that without changes to the 2023 Rendicion de Cuentas law, the streaming platform "will, unfortunately, begin to phase out its service in Uruguay effective January 1, 2024" and cease trading in the market in February 2024. The Swedish company is seeking confirmation on whether the additional costs to be paid to musicians are the responsibility of rights holders or of the streaming platforms, arguing that if the obligation falls on the platforms it would be required "to pay twice for the same music," Music Business Worldwide reports. The statement continued: "Spotify already pays nearly 70% of every dollar it generates from music to the record labels and publishers that own the rights for music, and represent and pay artists and songwriters. Any additional payments would make our business untenable." The platform claimed that it had contributed to a 20% growth in Uruguay's music industry in 2022. That year, the South American nation was the 53rd largest market for music.

Medicine

A Viral Post on Social Media Will Clear the Medical Debt of Strangers (msn.com) 221

"To celebrate my life, I've arranged to buy up others' medical debt and then destroy the debt," reads a posthumous tweet posted Tuesday after the death of 38-year-old Casey McIntyre.

The Washington Post explains... McIntyre, who served as publisher at Razorbill, an imprint of Penguin Random House, was diagnosed with cancer in 2019 and proceeded through treatment without taking on debt, [husband Andrew Rose] Gregory told The Washington Post. But many fellow cancer patients she met were in more precarious financial positions, Gregory added. "We were both so keenly aware that Casey had great health insurance through Penguin Random House," said Gregory, 41. "Casey had no medical debt...."

About nine months before McIntyre died, her husband came across a video online about members of a North Carolina church who purchased nearly $3.3 million of local residents' medical debt for $15,048 in a "debt jubilee," a historical reference to ancient stories about personal debts being canceled at regular intervals. The couple chose to make monthly donations to RIP Medical Debt, the same organization that partnered with the North Carolina churchgoers. The nonprofit organization aims to abolish medical debt "at pennies on the dollar," according to its website. For every $100 donated, the nonprofit relieves $10,000 of medical debt. As of Saturday, nearly $200,000 had been donated to RIP Medical Debt in McIntyre's memory, which would wipe out approximately $20 million of unpaid medical bills. [By Sunday afternoon it had risen to over $334,000...]
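Those totals follow directly from the nonprofit's stated ratio. As a quick, hedged sanity check (the roughly 100-to-1 figure is RIP Medical Debt's own published approximation, not a guaranteed rate), the arithmetic works out like this:

```python
# Back-of-the-envelope check of the figures cited above, using the
# organization's stated approximation of $10,000 in debt relieved per
# $100 donated (a ~100:1 ratio). Actual relief varies with the price
# at which debt portfolios are purchased, so these are estimates only.

RELIEF_PER_DOLLAR = 10_000 / 100  # dollars of debt relieved per dollar donated

for donated in (200_000, 334_000):
    relieved = donated * RELIEF_PER_DOLLAR
    print(f"${donated:,} donated -> roughly ${relieved:,.0f} of debt relieved")

# Output:
# $200,000 donated -> roughly $20,000,000 of debt relieved
# $334,000 donated -> roughly $33,400,000 of debt relieved
```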

Allison Sesso, president and chief executive of RIP Medical Debt, said her organization found out about McIntyre's case after McIntyre's posthumous social media post went viral. Sesso said the pace of donations was record-setting for her charity. "What an incredible gesture to the world as you're exiting," Sesso told The Post. "This final act of generosity is blowing up. The amount that they're raising and the rate at which this has gone is not something that we're used to."

McIntyre's post on X has now received 65,400 likes — and 3,086 reposts.
China

In World's Largest Disinformation Campaign Online, China Is Harassing Americans (cnn.com) 208

"The Chinese government has built up the world's largest known online disinformation operation," reports CNN, "and is using it to harass US residents, politicians, and businesses."

CNN reports that the disinformation operation is even "at times threatening its targets with violence, a CNN review of court documents and public disclosures by social media companies has found." The onslaught of attacks — often of a vile and deeply personal nature — is part of a well-organized, increasingly brazen Chinese government intimidation campaign targeting people in the United States, documents show. The U.S. State Department says the tactics are part of a broader multi-billion-dollar effort to shape the world's information environment and silence critics of Beijing that has expanded under President Xi Jinping... Victims face a barrage of tens of thousands of social media posts that call them traitors and dogs and hurl racist and homophobic slurs at them.

They say it's all part of an effort to drive them into a state of constant fear and paranoia. Often, these victims don't know where to turn. Some have spoken to law enforcement, including the FBI — but little has been done. While tech and social media companies have shut down thousands of accounts targeting these victims, they're outpaced by a slew of new accounts emerging virtually every day. Known as "Spamouflage" or "Dragonbridge," the network's hundreds of thousands of accounts spread across every major social media platform have not only harassed Americans who have criticized the Chinese Communist Party, but have also sought to discredit U.S. politicians, disparage American companies at odds with China's interests and hijack online conversations around the globe that could portray the CCP in a negative light.

Some numbers from the article:
  • Meta "announced in August it had taken down a cluster of nearly 8,000 accounts attributed to this group in the second quarter of 2023 alone."
  • YouTube owner Google "told CNN it had shut down more than 100,000 associated accounts in recent years."
  • X "has blocked hundreds of thousands of China 'state-backed' or "state-linked" accounts, according to company blogs."

Science

Race Cannot Be Used To Predict Heart Disease, Scientists Say (nytimes.com) 97

Doctors have long relied on a few key patient characteristics to assess risk of a heart attack or stroke, using a calculus that considers blood pressure, cholesterol, smoking and diabetes status, as well as demographics: age, sex and race. Now, the American Heart Association is taking race out of the equation. From a report: The overhaul of the widely used cardiac-risk algorithm is an acknowledgment that, unlike sex or age, race identification in and of itself is not a biological risk factor. The scientists who modified the algorithm decided from the start that race itself did not belong in clinical tools used to guide medical decision making, even though race might serve as a proxy for certain social circumstances, genetic predispositions or environmental exposures that raise the risk of cardiovascular disease.

The revision comes amid rising concern about health equity and racial bias within the U.S. health care system, and is part of a broader trend toward removing race from a variety of clinical algorithms. "We should not be using race to inform whether someone gets a treatment or doesn't get a treatment," said Dr. Sadiya Khan, a preventive cardiologist at Northwestern University Feinberg School of Medicine, who chaired the statement writing committee for the American Heart Association, or A.H.A. The statement was published on Friday [PDF] in the association's journal, Circulation. An online calculator using the new algorithm, called PREVENT, is still in development.

Science

Why Superconductor Research is in a 'Golden Age' - Despite Controversy (nature.com) 5

Davide Castelvecchi, writing for Nature: A Nature retraction last week has put to rest the latest claim of room-temperature superconductivity -- in which researchers said they had made a material that could conduct electricity without producing waste heat and without refrigeration. The retraction follows the downfall of an even more brazen claim about a supposed superconductor called LK-99, which went viral on social media earlier this year. Despite these high-profile setbacks, superconductivity researchers say the field is enjoying something of a renaissance. "It's not a dying field -- on the contrary," says Lilia Boeri, a physicist who specializes in computational predictions at the Sapienza University of Rome. The progress is fuelled in part by the new capabilities of computer simulations to predict the existence and properties of undiscovered materials.

Much of the excitement is focused on 'super-hydrides' -- hydrogen-rich materials that have shown superconductivity at ever-higher temperatures, as long as they are kept at high pressure. The subject of the retracted Nature paper was purportedly such a material, made of hydrogen, lutetium and nitrogen. But work in the past few years has unearthed several families of materials that could have revolutionary properties. "It really does look like we're on the hairy edge of being able to find a lot of new superconductors," says Paul Canfield, a physicist at Iowa State University in Ames and Ames National Laboratory.

Businesses

Users Can't Speak To Viral AI Girlfriend CarynAI Because CEO Is in Jail (404media.co) 52

samleecole writes: People who paid to speak to an AI girlfriend modeled after real-life 23-year-old influencer Caryn Marjorie are distraught because the service they paid for, Forever Companion, no longer works. It appears that the service stopped working shortly after Forever Companion CEO and founder John Meyer was arrested for trying to set his own apartment on fire.

404 Media tested CarynAI today as well as other AI bots and confirmed the service is not working. According to what we saw in the Telegram channel where Forever Companion users start conversations with CarynAI, the service has not been working since October 23. "I terminated my relationship with Forever Voices due to unforeseen circumstances," Marjorie told 404 Media in an email. "I wish the best for John Meyer and his family as he recovers from his mental health crisis. We didn't see this coming but I vow to push CarynAI forward for my fans and supporters." On October 30, Marjorie also announced that she's making a similar AI companion, "CarynAI 2.0," with another company called Banter AI. For the last few weeks, the official Forever Voices Twitter account has been posting bizarre videos and statements about the CIA, Donald Trump, and the FBI.

Technology

Proton Mail CEO Calls New Address Verification Feature 'Blockchain in a Very Pure Form' (fortune.com) 28

Proton Mail, the leading privacy-focused email service, is making its first foray into blockchain technology with Key Transparency, which will allow users to verify email addresses. From a report: In an interview with Fortune, CEO and founder Andy Yen made clear that although the new feature uses blockchain, the key technology behind crypto, Key Transparency isn't "some sketchy cryptocurrency" linked to an "exit scam." A student of cryptography, Yen added that the new feature is "blockchain in a very pure form," and it allows the platform to solve the thorny issue of ensuring that every email address actually belongs to the person who's claiming it.

Proton Mail uses end-to-end encryption, a secure form of communication that ensures only the intended recipient can read the information. Senders encrypt an email with the intended recipient's public key -- a long string of letters and numbers -- and the recipient then decrypts it with their own private key. The issue, Yen said, is ensuring that the public key actually belongs to the intended recipient. "Maybe it's the NSA that has created a fake public key linked to you, and I'm somehow tricked into encrypting data with that public key," he told Fortune. In the security space, the tactic is known as a "man-in-the-middle attack," like a postal worker opening your bank statement to get your Social Security number and then resealing the envelope.

A blockchain is an immutable ledger, meaning any data entered onto it can't later be altered. Yen realized that putting users' public keys on a blockchain would create a record ensuring those keys actually belonged to them -- one that could be cross-referenced whenever other users send emails. "In order for the verification to be trusted, it needs to be public, and it needs to be unchanging," Yen said.
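Conceptually, the check Yen describes amounts to consulting a public, append-only log before trusting a key. The sketch below is purely illustrative: a toy hash-chained log in Python, not Proton's actual Key Transparency implementation, with class and function names invented for the example. A sender looks up the fingerprint recorded for the recipient's address and refuses to trust any key the log has never seen for that address.

```python
# Illustrative sketch only: a toy append-only, hash-chained log used to check
# that a public key really belongs to an email address before encrypting to it.
# Names and structure are invented for this example; this is not Proton Mail's
# actual Key Transparency design.
import hashlib
from dataclasses import dataclass


@dataclass
class LogEntry:
    email: str
    key_fingerprint: str  # e.g. SHA-256 hash of the published public key
    prev_hash: str        # chaining to the previous entry makes tampering evident

    def entry_hash(self) -> str:
        payload = f"{self.email}|{self.key_fingerprint}|{self.prev_hash}"
        return hashlib.sha256(payload.encode()).hexdigest()


class KeyLog:
    """Append-only log: entries can be added, never altered or removed."""

    def __init__(self):
        self.entries = []  # list of LogEntry, in publication order

    def publish(self, email: str, public_key: bytes) -> None:
        fingerprint = hashlib.sha256(public_key).hexdigest()
        prev = self.entries[-1].entry_hash() if self.entries else "genesis"
        self.entries.append(LogEntry(email, fingerprint, prev))

    def verify(self, email: str, offered_key: bytes) -> bool:
        """True only if the offered key matches a fingerprint logged for this address."""
        fingerprint = hashlib.sha256(offered_key).hexdigest()
        return any(e.email == email and e.key_fingerprint == fingerprint
                   for e in self.entries)


# Usage: the sender checks the log before encrypting to a key it was handed.
log = KeyLog()
log.publish("alice@example.com", b"alice-public-key-bytes")

assert log.verify("alice@example.com", b"alice-public-key-bytes")        # genuine key accepted
assert not log.verify("alice@example.com", b"attacker-substituted-key")  # swapped key rejected
```

In a real deployment the log would be operated and audited publicly, and clients would verify the chain itself rather than trusting any single server's copy, which is what gives the scheme its "public and unchanging" property.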

Space

FAA Clears SpaceX To Launch Second Starship Flight (cnbc.com) 19

The FAA has cleared SpaceX to launch the second test flight of its Starship rocket. CNBC reports: SpaceX posted on the social media platform X shortly after the greenlight that it was "targeting Friday, November 17 for Starship's second flight test." A two-hour launch window will begin at 8 a.m. ET. SpaceX plans to livestream the Starship launch, with a webcast beginning about 30 minutes before liftoff. Starship first launched in April, achieving flight for a few minutes before exploding mid-air, severely damaging the ground infrastructure and raising environmental concerns. The FAA in coordination with the U.S. Fish and Wildlife Service launched a safety review prior to issuing a new flight license for the second attempt.

FWS determined that the rocket launch and subsequent damage to the pad infrastructure had no long-term negative effects on the surrounding ecology, according to an agency report released Wednesday. Still, SpaceX will help mitigate damage to the area by reducing sound waves and vibrations, assisting in fire suppression, and providing launch pad protection, the agency said. As a result, "the FAA determined SpaceX met all safety, environmental, policy and financial responsibility requirements," the agency said in a statement Wednesday.

China

China Claims World's Fastest Internet With 1.2 Terabit-Per-Second Network (bloomberg.com) 45

Huawei and China Mobile have built a 3,000 kilometer (1,860-mile) internet network linking Beijing to the south, which the country is touting as its latest technological breakthrough. From a report: The two firms teamed up with Tsinghua University and research provider Cernet.com to build what they claim is the world's first internet network to achieve a "stable and reliable" bandwidth of 1.2 terabits per second, several times faster than typical speeds around the world. Trials began July 31 and it's since passed various tests verifying that milestone, the university said in a statement.

Tsinghua, Chinese President Xi Jinping's alma mater, is plugging the project as an industry-first built entirely on homegrown technology, and credits Huawei prominently in its statement. The Chinese firm in August made waves when it released a 5G smartphone with a sophisticated made-in-China processor, inspiring celebration in Chinese state and social media. That event also spurred debate in Washington about whether the Biden administration has gone far enough in attempts to contain Chinese technological achievement.

AI

White Faces Generated By AI Are More Convincing Than Photos, Finds Survey (theguardian.com) 70

Nicola Davis reports via The Guardian: A new study has found people are more likely to think pictures of white faces generated by AI are human than photographs of real individuals. "Remarkably, white AI faces can convincingly pass as more real than human faces -- and people do not realize they are being fooled," the researchers report. The team, which includes researchers from Australia, the UK and the Netherlands, said their findings had important implications in the real world, including in identity theft, with the possibility that people could end up being duped by digital impostors.

However, the team said the results did not hold for images of people of color, possibly because the algorithm used to generate AI faces was largely trained on images of white people. Dr Zak Witkower, a co-author of the research from the University of Amsterdam, said that could have ramifications for areas ranging from online therapy to robots. "It's going to produce more realistic situations for white faces than other race faces," he said. The team cautioned that such a situation could also mean perceptions of race end up being confounded with perceptions of being "human," adding that it could also perpetuate social biases, including in searches for missing children, which can depend on AI-generated faces.

The findings have been published in the journal Psychological Science.
The Courts

Social Media Giants Must Face Child Safety Lawsuits, Judge Rules (theverge.com) 53

Emma Roth reports via The Verge: Meta, ByteDance, Alphabet, and Snap must face a lawsuit alleging their social platforms have adverse mental health effects on children, a federal court ruled on Tuesday. US District Judge Yvonne Gonzalez Rogers rejected the social media giants' motion to dismiss the dozens of lawsuits accusing the companies of running platforms "addictive" to kids. School districts across the US have filed suit against Meta, ByteDance, Alphabet, and Snap, alleging the companies cause physical and emotional harm to children. Meanwhile, 42 states sued Meta last month over claims Facebook and Instagram "profoundly altered the psychological and social realities of a generation of young Americans." This order addresses the individual suits and "over 140 actions" taken against the companies.

Tuesday's ruling states that the First Amendment and Section 230, which says online platforms shouldn't be treated as the publishers of third-party content, don't shield Facebook, Instagram, YouTube, TikTok, and Snapchat from all liability in this case. Judge Gonzalez Rogers notes many of the claims laid out by the plaintiffs don't "constitute free speech or expression," as they have to do with alleged "defects" on the platforms themselves. That includes having insufficient parental controls, no "robust" age verification systems, and a difficult account deletion process.

"Addressing these defects would not require that defendants change how or what speech they disseminate," Judge Gonzalez Rogers writes. "For example, parental notifications could plausibly empower parents to limit their children's access to the platform or discuss platform use with them." However, Judge Gonzalez Rogers still threw out some of the other "defects" identified by the plaintiffs because they're protected under Section 230, such as offering a beginning and end to a feed, recommending children's accounts to adults, the use of "addictive" algorithms, and not putting limits on the amount of time spent on the platforms.

Security

Healthcare Giant McLaren Reveals Data On 2.2 Million Patients Stolen During Ransomware Attack (techcrunch.com) 12

An anonymous reader quotes a report from TechCrunch: Michigan-based McLaren Health Care has confirmed that the sensitive personal and health information of 2.2 million patients was compromised during a cyberattack earlier this year. A ransomware gang later took credit for the cyberattack. In a new data breach notice filed with Maine's attorney general, McLaren said hackers were in its systems for three weeks, from July 28 through August 23, before the healthcare company noticed a week later, on August 31. McLaren said the hackers accessed patients' names, dates of birth and Social Security numbers, and a wealth of medical information, including billing, claims and diagnosis information, prescription and medication details, and information relating to diagnostic results and treatments. Medicare and Medicaid patient information was also taken.

McLaren is a healthcare provider with 13 hospitals across Michigan and about 28,000 total employees. McLaren, whose website touts its cost efficiency measures, made over $6 billion in revenue in 2022. News of the incident broke in October when the Alphv ransomware gang (also known as BlackCat) claimed responsibility for the cyberattack, claiming it took millions of patients' personal information. Days after the cyberattack was disclosed, Michigan attorney general Dana Nessel warned state residents that the breach "could affect large numbers of patients." TechCrunch has seen several screenshots posted by the ransomware gang on its dark web leak site showing access to the company's password manager, internal financial statements, some employee information, and spreadsheets of patient-related personal and health information, including names, addresses, phone numbers, Social Security numbers, and diagnostic information. Alphv/BlackCat claimed in its post that the gang had been in contact with a McLaren representative, without providing evidence of the claim.

AI

Giant AI Platform Introduces 'Bounties' For Deepfakes of Real People (404media.co) 28

An anonymous reader quotes a report from 404 Media: Civitai, an online marketplace for sharing AI models that enables the creation of nonconsensual sexual images of real people, has introduced a new feature that allows users to post "bounties." These bounties allow users to ask the Civitai community to create AI models that generate images of specific styles, compositions, or specific real people, and reward the best AI model that does so with a virtual currency users can buy with real money. As is common on the site, many of the bounties posted to Civitai since the feature was launched are focused on recreating the likeness of celebrities and social media influencers, almost exclusively women. But 404 Media has seen at least one bounty for a private person who has no significant public online presence.

"I am very afraid of what this can become, for years I have been facing problems with the misuse of my image and this has certainly never crossed my mind," Michele Alves, an Instagram influencer who has a bounty on Civitai, told 404 Media. "I don't know what measures I could take, since the internet seems like a place out of control. The only thing I think about is how it could affect me mentally because this is beyond hurtful." The news shows how increasingly easy to use text-to-image AI tools, the ability to easily create AI models of specific people, and a platform that monetizes the production of nonconsensual sexual images is making it possible to generate nonconsensual images of anyone, not just celebrities.

The bounty for a real person that 404 Media saw on Civitai did not include a name, and included a handful of images that were taken from her social media accounts. 404 Media was able to find this person's online accounts and confirm they were not a celebrity or social media influencer, but just a regular person with personal social media accounts with few followers. The person who posted the bounty claimed that the woman he wanted an AI model of was his wife, though her Facebook account said she was single. Other Civitai users also weren't buying that explanation. Despite suspicions from these users, someone did complete the bounty and created an AI model of the woman that any Civitai user can now download. Several non-sexual AI-generated images of her have been posted to the site.

Social Networks

Nepal To Ban TikTok (kathmandupost.com) 40

The Nepal government has decided to impose a ban on TikTok. From a report on the local newspaper Kathmandu Post: A Cabinet meeting on Monday took the decision to ban the Chinese-owned app, citing its negative effects on social harmony. However, when the decision will be brought into force is yet to be ascertained. Although freedom of expression is a basic right, a large section of society has criticised TikTok for encouraging a tendency of hate speech, the government said. In the past four years, 1,647 cases of cyber crime have been reported on the video sharing app.

The Cyber Bureau of the Nepal Police, Ministry of Home Affairs, and representatives of TikTok discussed the issue earlier last week. Monday's decision is expected to be enforced following the completion of technical preparations. The latest decision came within days of the government introducing the 'Directives on the Operation of Social Networking 2023.' As per the new rule, social media platforms operating in Nepal are required to set up offices in the country.

Google

Google Fights Scammers Using Bard Hype To Spread Malware (theverge.com) 5

Google is suing scammers who are trying to use the hype around generative AI to trick people into downloading malware, the company has announced. From a report: In a lawsuit filed today in California, the company says individuals believed to be based in Vietnam are setting up social media pages and running ads encouraging users to "download" its generative AI service Bard. The download actually delivers malware to the victims, which steals social media credentials for the scammers to use. "Defendants are three individuals whose identities are unknown who claim to provide, among other things, 'the latest version' of Google Bard for download," the lawsuit reads.

"Defendants are not affiliated with Google in any way, though they pretend to be. They have used Google trademarks, including Google, Google AI, and Bard to lure unsuspecting victims into downloading malware onto their computers." The lawsuit notes that scammers have specifically used promoted Facebook posts in an attempt to distribute malware. Similar to crypto scams, the lawsuit highlights how interest in an emerging technology can be weaponized against people who may not fully understanding how it operates.

AI

AI-Generated Voice Deepfakes are Being Used in Scams (palmbeachpost.com) 19

Images and information from social media (and other online sources) are being used by AI to "create convincing and personalized scam calls, texts and emails," writes the Palm Beach Post, citing a warning from Florida's consumer watchdog agency. In an older version of the scam, a caller would greet "Grandma" or "Grandpa" before saying, "It's me — I know I sound funny because I have a cold," and then make an urgent plea for money to get out of a scrape... Using audio and video clips found online, the con artist can clone the voice of a family member to make the call more compelling...

Listen for clues to a con like incorrect or mispronounced names or unfamiliar terms of endearment. Pressure to act quickly and to keep the call a secret are timeless hallmarks of a scam, the agency notes. Detailed instructions on how to deliver funds in a form that is hard to recover — wired funds, a gift card or pay app — are also indications of a ripoff in the making.

The consumer watchdog agency suggests this precaution. "Encourage family members to set their social media pages to private."

Thanks to long-time Slashdot reader SonicSpike for sharing the article.
AI

Former President Obama Warns 'Disruptive' AI May Require Rethinking Jobs and the Economy (theverge.com) 151

This week the Verge's podcast Decoder interviewed former U.S. president Barack Obama for a discussion on "AI, free speech, and the future of the internet."

Obama warns that future copyright questions are just part of a larger issue. "If AI turns out to be as pervasive and as powerful as its proponents expect — and I have to say the more I look into it, I think it is going to be that disruptive — we are going to have to think about not just intellectual property; we are going to have to think about jobs and the economy differently."

Specific issues may include the length of the work week and the fact that health insurance coverage is currently tied to employment — but it goes far beyond that: The broader question is going to be what happens when 10% of existing jobs now definitively can be done by some large language model or other variant of AI? And are we going to have to reexamine how we educate our kids and what jobs are going to be available...?

The truth of the matter is that during my presidency, there was I think a little bit of naivete, where people would say, you know, "The answer to lifting people out of poverty and making sure they have high enough wages is we're going to retrain them and we're going to educate them, and they should all become coders, because that's the future." Well, if AI's coding better than all but the very best coders? If ChatGPT can generate a research memo better than the third-, fourth-year associate — maybe not the partner, who's got a particular expertise or judgment? — now what are you telling young people coming up?

While Obama believes in the transformative potential of AI, "we have to be maybe a little more intentional about how our democracies interact with what is primarily being generated out of the private sector. What rules of the road are we setting up, and how can we make sure that we maximize the good and maybe minimize some of the bad?"

AI's impact will be a global problem, Obama believes, which may require "cross-border frameworks and standards and norms". (He expressed a hope that governments can educate the public on the idea that AI is "a tool, not a buddy".) During the 44-minute interview Obama predicted AI will ultimately force a "much more robust" public conversation about rules needed for social media — and that at least some of that pressure could come from how consumers interact with companies. (Obama also argues there will still be a market for products that don't just show you what you want to see.)

"One of Obama's worries is that the government needs insight and expertise to properly regulate AI," writes the Verge's editor-in-chief in an article about the interview, "and you'll hear him make a pitch for why people with that expertise should take a tour of duty in the government to make sure we get these things right." You'll hear me get excited about a case called Red Lion Broadcasting v. FCC, a 1969 Supreme Court decision that said the government could impose something called the Fairness Doctrine on radio and television broadcasters because the public owns the airwaves and can thus impose requirements on how they're used. There's no similar framework for cable TV or the internet, which don't use public airwaves, and that makes them much harder, if not impossible, to regulate. Obama says he disagrees with the idea that social networks are something called "common carriers" that have to distribute all information equally.

Obama also applauded last month's newly issued Executive Order from the White House, a hundred-page document which Obama calls important as "the beginning of building out a framework." We don't know all the problems that are going to arise out of this. We don't know all the promising potential of AI, but we're starting to put together the foundations for what we hope will be a smart framework for dealing with it... In talking to the companies themselves, they will acknowledge that their safety protocols and their testing regimens may not be where they need to be yet. I think it's entirely appropriate for us to plant a flag and say, "All right, frontier companies, you need to disclose what your safety protocols are to make sure that we don't have rogue programs going off and hacking into our financial system," for example. Tell us what tests you're using. Make sure that we have some independent verification that right now this stuff is working.

But that framework can't be a fixed framework. These models are developing so quickly that oversight and any regulatory framework is going to have to be flexible, and it's going to have to be nimble.

Privacy

It's Still Too Easy for Anyone to 'Become You' at Experian (krebsonsecurity.com) 36

An anonymous reader shared this report from security researcher Brian Krebs: In the summer of 2022, KrebsOnSecurity documented the plight of several readers who had their accounts at big-three consumer credit reporting bureau Experian hijacked after identity thieves simply re-registered the accounts using a different email address. Sixteen months later, Experian clearly has not addressed this gaping lack of security. I know that because my account at Experian was recently hacked, and the only way I could recover access was by recreating the account...

The homepage said I needed to provide a Social Security number and mobile phone number, and that I'd soon receive a link that I should click to verify myself. The site claims that the phone number you provide will be used to help validate your identity. But it appears you could supply any phone number in the United States at this stage in the process, and Experian's website would not balk.

One user said they recreated their account this week — even though the phone number they'd input was a random number. "The only difference: it asked me FIVE questions about my personal history (last time it only asked three) before proclaiming, 'Welcome back, Pete!,' and granting full access," @PeteMayo wrote. "I feel silly saving my password for Experian; may as well just make a new account every time."

And Krebs points out that "Regardless, users can simply skip this step by selecting the option to 'Continue another way.'" Experian then asks for your full name, address, date of birth, Social Security number, email address and chosen password. After that, they require you to successfully answer three to five multiple-choice security questions whose answers are very often based on public records. When I recreated my account this week, only two of the five questions pertained to my real information, and both of those questions concerned street addresses we've previously lived at — information that is just a Google search away...

Experian will send a message to the old email address tied to the account, saying certain aspects of the user profile have changed. But this message isn't a request seeking verification: It's just a notification from Experian that the account's user data has changed, and the original user is offered zero recourse here other than to click a link to log in at Experian.com. And of course, a user who receives one of these notices will find that the credentials to their Experian account no longer work. Nor do their PIN or account recovery question, because those have been changed also. Your only option at this point is to recreate your account at Experian and steal it back from the ID thieves!

Experian's security measures "are constantly evolving," insisted Experian spokesperson Scott Anderson — though Krebs remains unsatisfied. Anderson said all consumers have the option to activate a multi-factor authentication method that's requested each time they log in to their account. But what good is multi-factor authentication if someone can simply recreate your account with a new phone number and email address?
The Internet

Is India Setting a 'Global Standard' for Online Censorship of Social Media? (msn.com) 63

With 1.4 billion people, India is the second most-populous country in the world.

But a new article in the Washington Post alleges that India has "set a global standard for online censorship." For years, a committee of executives from U.S. technology companies and Indian officials convened every two weeks in a government office to negotiate what could — and could not — be said on Twitter, Facebook and YouTube. At the "69A meetings," as the secretive gatherings were informally called, officials from India's information, technology, security and intelligence agencies presented social media posts they wanted removed, citing threats to India's sovereignty and national security, executives and officials who were present recalled. The tech representatives sometimes pushed back in the name of free speech...

But two years ago, these interactions took a fateful turn. Where officials had once asked for a handful of tweets to be removed at each meeting, they now insisted that entire accounts be taken down, and numbers were running in the hundreds. Executives who refused the government's demands could now be jailed, their companies expelled from the Indian market. New regulations had been adopted that year to hold tech employees in India criminally liable for failing to comply with takedown requests, a provision that executives referred to as a "hostage provision." After authorities dispatched anti-terrorism police to Twitter's New Delhi office, Twitter whisked its top India executive out of the country, fearing his arrest, former company employees recounted.

Indian officials say they have accomplished something long overdue: strengthening national laws to bring disobedient foreign companies to heel... Digital and human rights advocates warn that India has perfected the use of regulations to stifle online dissent and already inspired governments in countries as varied as Nigeria and Myanmar to craft similar legal frameworks, at times with near-identical language. India's success in taming internet companies has set off "regulatory contagion" across the world, according to Prateek Waghre, a policy director at India's Internet Freedom Foundation...

Despite the huge size of China's market, companies like Twitter and Facebook were forced to steer clear of the country because Beijing's rules would have required them to spy on users. That left India as the largest potential growth market. Silicon Valley companies were already committed to doing business in India before the government began to tighten its regulations, and today say they have little choice but to obey if they want to remain there.

The Post spoke to Rajeev Chandrasekhar, the deputy technology minister in the BJP government who oversees many of the new regulations. He argued, "The shift was really simple: We've defined the laws, defined the rules, and we have said there is zero tolerance to any noncompliance with the Indian law..."

"You don't like the law? Don't operate in India," Chandrasekhar added. "There is very little wiggle room."
