Windows

Ask Slashdot: Should Production Networks Avoid Windows 11?

Slashdot reader John Smith 2294 is an IT consultant and system administrator "who started in the days of DEC VAX/VMS," now maintaining networks for small to medium businesses and non-profits. And they're sharing a concern with Slashdot.

"I object to Windows 11 insisting on an outlook.com / Microsoft Account OS login. Sure there are workarounds, but user action or updates can undo them. So I will not be using Windows 11 for science or business any more.... I will be using Win10 refurbs for as long as they are available, and then Mac Mini refurbs and Linux. My first Linux Mint user has been working happily for two months now and I have not heard a word from them."

So, as an IT Admin responsible for business or education networks of 20 users or more, will you be using Windows 11 on your networks or, like me, is this the end of the road for Windows for you too?

I'd thought their concern would be about Windows sending user data to third parties. But are these really big enough reasons for system administrators to avoid Windows 11 altogether?

Share your thoughts and experiences in the comments. Should production networks avoid Windows 11?
Education

Internal Review Found 'Falsified Data' in Stanford President's Alzheimer's Research, Colleagues Allege (stanforddaily.com)

Stanford University president Marc Tessier-Lavigne was formerly executive vice president for research and chief scientific officer at biotech giant Genentech, according to his page on Wikipedia. "In 2022, Stanford University opened an investigation into allegations of Tessier-Lavigne's involvement in fabricating results in articles published between 2001 and 2008."

But Friday Stanford's student newspaper published even more allegations: In 2009, Marc Tessier-Lavigne, then a top executive at the biotechnology company Genentech, was the primary author of a scientific paper published in the prestigious journal Nature that claimed to have found the potential cause for brain degeneration in Alzheimer's patients. "Because of this research," read Genentech's annual letter to shareholders, "we are working to develop both antibodies and small molecules that may attack Alzheimer's from a novel entry point and help the millions of people who currently suffer from this devastating disease."

But after several unsuccessful attempts to reproduce the research, the paper became the subject of an internal review by Genentech's Research Review Committee (RRC), according to four high-level Genentech employees at the time... The scientists, one of whom was an executive who sat on the review committee and all of whom were informed of the review's findings at the time due to their stature at the company, said that the inquiry discovered falsification of data in the research, and that Tessier-Lavigne kept the finding from becoming public.

Tessier-Lavigne denies both allegations. Genentech said in a statement that "as part of our diligence related to these allegations, we reviewed the records from that November 2011 RRC meeting and saw no allegations of fraud or wrongdoing." The company acknowledged that "given that these events happened many years ago ... our current records may not be complete."

After the review, which began in 2011, Genentech canceled research based on the paper's findings. Till Maurer, a senior scientist at the company from 2009-2018 who said he was assigned to develop drugs based on the 2009 paper, told The Daily that his superior informed him that, in Maurer's words, "the project is being canceled and it's because they found falsified data...."

According to the executive who was part of the committee that reviewed the paper, the inquiry was thorough and left little room for doubt. Laboratory technicians and assistants were interviewed while scientists independent of the lab attempted to verify the findings of the study. "None of [the research review committee members] believed that these data were true by the time people had attempted to reproduce it," the executive said. He said that the understanding of the research committee was that the paper's supposed finding of N-APP's role in Alzheimer's had been "faked," and used "made up" figures as evidence.

Education

Steep Declines In Data Science Skills Among Fourth- and Eighth-Graders Across America, Study Finds (phys.org)

A new report (PDF) from the Data Science 4 Everyone coalition reveals that data literacy skills among fourth and eighth-grade students have declined significantly over the last decade even as these skills have become increasingly essential in our modern, data-driven society. Phys.Org reports: Based on data from the latest National Assessment of Educational Progress results, the report uncovered several trends that raise concerns about whether the nation's educational system is sufficiently preparing young people for a world reshaped by the rise of big data and artificial intelligence. Key findings include:

- The pandemic decline is part of a much longer-term trend. Between 2019 and 2022, scores in the data analysis, statistics, and probability section of the NAEP math exam fell by 10 points for eighth-graders and by four points for fourth-graders. Declining scores are part of a longer-term trend, with scores down 17 points for eighth-graders and down 10 points for fourth-graders over the last decade. That means today's eighth-graders have the data literacy of sixth-graders from a decade ago, and today's fourth-graders have the data literacy of third-graders from a decade ago.

- There are large racial gaps in scores. These gaps exist across all grade levels but are at times most dramatic in the middle and high school levels. For instance, fourth-grade Black students scored 28 points lower -- the equivalent of nearly three grade levels -- than their white peers in data analysis, statistics, and probability.

- Data-related instruction is in decline. Every state except Alabama reported a decline or stagnant trend in data-related instruction, with some states -- like Maryland and Iowa -- seeing double-digit drops. The national share of fourth-grade math teachers reporting "moderate" or "heavy" emphasis on data analysis dropped five percentage points between 2019 and 2022.
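The report's grade-level comparisons rest on converting NAEP score drops into years of learning. As a rough sanity check, here is a minimal sketch; the points-per-grade-level factor is an assumption chosen for illustration (NAEP publishes no official conversion), picked so the cited drops line up with the report's "two grade levels" and "one grade level" framing:

```python
# ASSUMPTION (not from the report): roughly 8-9 NAEP scale points
# correspond to one grade level of learning.
POINTS_PER_GRADE_LEVEL = 8.5  # illustrative midpoint, an assumption

def grade_levels_lost(score_drop_points: float) -> float:
    """Convert a NAEP score drop into approximate grade levels."""
    return score_drop_points / POINTS_PER_GRADE_LEVEL

# Decade-long drops cited in the report:
for cohort, drop in [("eighth-graders", 17), ("fourth-graders", 10)]:
    print(f"{cohort}: {drop} points ~ {grade_levels_lost(drop):.1f} grade levels")
```

With that assumed factor, the 17-point drop works out to about two grade levels and the 10-point drop to about one, matching the report's eighth-to-sixth and fourth-to-third comparisons.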

Piracy

Z-Library Returns, Offering 'Unique' Domain Name To All Users (torrentfreak.com)

An anonymous reader quotes a report from TorrentFreak: The U.S. Government's crackdown against Z-Library late last year aimed to wipe out the pirate library for good. The criminal prosecution caused disruption but didn't bring the site completely to its knees. Z-Library continued to operate on the dark web and this weekend, reappeared on the clearnet, offering a 'unique' domain name to all users. [...] Sites can often be seen hardening their operations to mitigate disruption caused by domain name seizures. Many have a list of backup domains that can be deployed when needed; The Pirate Bay infamously launched its hydra setup consisting of five different domain names. Z-Library is taking this hydra-inspired scheme to the next level. A new announcement reveals that the platform is publicly available once again and offering a unique and private domain name to every user.

"We have great news for you -- Z-Library is back on the Clearnet again! To access it, follow this link singlelogin.me and use your regular login credentials," the Z-Library team writes. "After logging into your account, you will be redirected to your personal domain. Please keep your personal domain private! Don't disclose your personal domain and don't share the link to your domain, as it is protected with your own password and cannot be accessed by other users." While we can't confirm that all users will get unique domain names, people are indeed redirected to different clearnet domains after logging in. After doing so, a popup message reminds them to keep their personal domain secret.

The domain names in question are subdomains of newly registered TLDs that rely on different domain name registries. Every user has two of these 'personal' domains listed on their personal profile page. If users can't access the universal login page, Z-Library says they can log in through TOR or I2P and get their personal clearnet domains there. How many new domain names Z-Library has is unclear but that's exactly the point. The site's operators want to prevent future domain name seizures and with the U.S. Government on its back, new domains are far from safe.

Power

Several US Universities Want to Use Micronuclear Reactors (apnews.com)

The University of Illinois plans to apply for a construction permit for a high-temperature, gas-cooled micronuclear reactor, reports the Associated Press, "and aims to start operating it by early 2028."

And they're not the only ones interested in the technology: Last year, Penn State University signed a memorandum of understanding with Westinghouse to collaborate on microreactor technology. Mike Shaqqo, the company's senior vice president for advanced reactor programs, said universities are going to be "one of our key early adopters for this technology." Penn State wants to prove the technology so that Appalachian industries, such as steel and cement manufacturers, may be able to use it, said Professor Jean Paul Allain, head of the nuclear engineering department. Those two industries tend to burn dirty fuels and have very high emissions....

"I do feel that microreactors can be a game-changer and revolutionize the way we think about energy," Allain said. For Allain, microreactors can complement renewable energy by providing a large amount of power without taking up much land. A 10-megawatt microreactor could go on less than an acre, whereas windmills or a solar farm would need far more space to produce 10 megawatts, he added. The goal is to have one at Penn State by the end of the decade....

Nuclear reactors that are used for research are nothing new on campus. About two dozen U.S. universities have them. But using them as an energy source is new.

Other examples from the article:
  • Purdue University in Indiana "is working with Duke Energy on the feasibility of using advanced nuclear energy to meet its long-term energy needs."
  • Abilene Christian University in Texas "is leading a group of three other universities with the company Natura Resources to design and build a research microreactor cooled by molten salt to allow for high temperature operations at low pressure, in part to help train the next generation nuclear workforce."

Biotech

Americans Are Ready To Test Embryos For Future College Chances, Survey Shows (technologyreview.com)

An anonymous reader quotes a report from MIT Technology Review: Imagine that you were provided no-cost fertility treatment and also offered a free DNA test to gauge which of those little IVF embryos floating in a dish stood the best chance of getting into a top college someday. Would you have the test performed? If you said yes, you're among about 40% of Americans who told pollsters they'd be more likely than not to test and pick IVF embryos for intellectual aptitude, despite hand-wringing by ethicists and gene scientists who think it's a bad idea. The opinion survey, published in the journal Science, was carried out by economists and other researchers who say surprisingly strong support for the embryo tests means the US might need to hurry up and set policies for the technology.

The new poll compared people's willingness to advance their children's prospects in three ways: using SAT prep courses, embryo tests, and gene editing on embryos. It found some support even for the most radical option, genetic modification of children, which is prohibited in the US and many other countries. About 28% of those polled said they'd probably do that if it was safe. The authors of the new poll are wrestling with the consequences of information that they helped discover via a series of ever larger studies to locate genetic causes of human social and cognitive traits, including sexual orientation and intelligence. That includes a report published last year on how the DNA differences among more than 3 million people related to how far they'd gone in school, a life result that is correlated with a person's intelligence.

The result of such research is a so-called "polygenic score," or a genetic test that can predict from genes whether -- among other things -- someone is going to be more or less likely to attend college. Of course, environmental factors matter plenty, and DNA is not destiny. Yet the gene tests are surprisingly predictive. In their poll, the researchers told people to assume that around 3% of kids will go to a top-100 college. By picking the one of 10 IVF embryos with the highest gene score, parents would increase that chance to 5% for their kid. It's tempting to dismiss the advantage gained as negligible, but "assuming they are right," says study co-author Shai Carmi, it's actually "a very large relative increase" in the chance of going to such a school for the offspring in question -- about 67%.
"The current poll found only 6% of people are morally opposed to IVF today, only about 17% have strong moral qualms about testing embryos, and 38% would probably do so to boost education prospects if given the opportunity," adds the report.
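The "about 67%" figure is a relative increase, which is easy to misread next to the much smaller 2-percentage-point absolute gain. A quick check of the arithmetic, using the numbers from the poll's scenario as given above:

```python
# Scenario from the survey: baseline ~3% chance of attending a
# top-100 college; selecting the highest-scoring of 10 IVF
# embryos raises that to ~5%.
baseline = 0.03
with_selection = 0.05

absolute_gain = with_selection - baseline               # 2 percentage points
relative_gain = (with_selection - baseline) / baseline  # ~0.67, i.e. ~67%

print(f"absolute gain: {absolute_gain:.2%} points")
print(f"relative gain: {relative_gain:.0%}")
```

So the same result can honestly be described as a modest 2-point bump or a 67% relative improvement, which is precisely why the framing matters.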
AI

Alibaba, Tencent and Baidu Join the ChatGPT Rush

China's biggest tech companies are rushing to develop their own versions of ChatGPT, the AI-powered chatbot that has set the U.S. tech world buzzing, despite questions over the capabilities and commercial prospects of the technology. Nikkei Asia Review reports: Alibaba Group Holding, Tencent Holdings, Baidu, NetEase and JD.com all unveiled plans this week to test and launch their own ChatGPT-like services in the near future, eager to show the results of their AI research efforts are just as ready for prime time as those of their U.S. counterparts. [...] Shares of Baidu surged to an 11-month high after the search giant on Monday revealed its plan to launch the ChatGPT-style "Ernie Bot," which is built on tech the company said has been in development since 2019. The company aims to complete internal testing in March before making the chatbot available to the public. Following Baidu's announcement, Alibaba said it is internally testing a ChatGPT-style tool, without revealing more details. The e-commerce conglomerate's shares closed up 3.96% in Hong Kong on Thursday. Tencent on Thursday confirmed its plans for ChatGPT-style and AI-generated content services, saying relevant research is underway "in an orderly manner."

Online retailer JD.com said it plans to integrate some of the technologies that underpin applications like ChatGPT, such as natural language processing, in its own services. Gaming giant NetEase said it is researching the incorporation of AI-generated content into its education unit. Chinese media reported on Thursday that ByteDance's AI lab has launched certain research initiatives on technologies to support its virtual reality arm Pico. However, a person familiar with the matter at ByteDance told Nikkei that the report was false.
"Making use of AI-generated content is a natural thing," an unnamed executive from one of the leading listed Chinese tech companies told Nikkei. "Whenever there is a so-called next big thing, multiple companies will announce that they are in this area, but some companies may be just hyping with the catchword without any concrete product."

"Another challenge is China's heavy censorship of cyberspace, which will make AI-generated content difficult, too."
Education

The End of Grading (wired.com)

How the irrational mathematics of measuring, ranking, and rating distort the value of stuff, work, people -- everything. From a report: More irrational even than pi, assessing people amounts to quantifying a relationship between unknown, usually unknowable things. Every measurement, the mathematician Paul Lockhart reminds us in his book Measurement, is a comparison: "We are comparing the thing we are measuring to the thing we are measuring it with." What thing do we use to measure undergraduates? What aspects can be compared? Quality or quantity? Originality or effort? Participation or progress? Apples and oranges at best. Closer to bananas and elephants. Even quantitative tests mark, at most, a comparison between what the test-maker thought the student should know and the effectiveness of instruction. Grades become the permanent records of these passing encounters.

And how do we grade the grader? When a physicist friend found out that a first-year Harvard student he knew -- a math star in high school -- got an F in physics, he said: "Harvard should be ashamed of itself." A Harvard grad himself, he believed that schools fail students far more often than students fail schools. Some STEM profs, I'm told, tell the class at the outset that half of them will fail. I give that teacher an F.

I'm not alone in my discomfort with the irrational business of ranking, rating, and grading. The deans of Yale's and Harvard's law schools recently removed themselves from the rankings of US News & World Report, followed by Harvard Medical School and scores of others. "Rankings cannot meaningfully reflect ... educational excellence," Harvard dean George O. Daley explained. Rankings lead schools to falsify data and make policies designed to raise rankings rather than "nobler objectives."

The very thing that's been eating education is now devouring everything else. My doctor recently urged me to get an expensive diagnostic test because it "makes our numbers look good." Her nurse asked me to rank my pain on a totem pole of emojis. Then after the visit, to rate my experience. The numbers are all irrational. And rather like the never-ending digits of pi, there seems to be no end to them.

United States

America Failing To Prepare Gen Z To Enter the Workforce Due To 'Glaring' Gap in Tech Skills (fortune.com)

Computer classes for Gen Z aren't cutting it anymore. From a report: More than a third (37%) of Gen Zers feel their school education didn't prepare them with the digital skills they need to propel their career, according to Dell Technologies' international survey of more than 15,000 adults ages 18 to 26 across 15 countries. A majority (56%) of this generation added that they had very basic to no digital skills education. It's all led to some warranted skepticism regarding the future of work: Many Gen Zers are unsure what the digital economy will look like, and 33% have little to no confidence that the government's investments in a digital future will be successful in 10 years. Forty-four percent think that schools and businesses should work together to address the digital skills gap. The findings back up past research that found nearly half of the Class of 2022 felt the top skill they were underprepared for was technical skills.
Education

Students Lost One-Third of a School Year To Pandemic, Study Finds (nytimes.com)

Children experienced learning deficits during the Covid pandemic that amounted to about one-third of a school year's worth of knowledge and skills, according to a new global analysis, and had not recovered from those losses more than two years later. The New York Times reports: Learning delays and regressions were most severe in developing countries and among students from low-income backgrounds, researchers said, worsening existing disparities and threatening to follow children into higher education and the work force. The analysis, published Monday in the journal Nature Human Behaviour and drawing on data from 15 countries, provided the most comprehensive account to date of the academic hardships wrought by the pandemic. The findings suggest that the challenges of remote learning -- coupled with other stressors that plagued children and families throughout the pandemic -- were not rectified when school doors reopened.

"In order to recover what was lost, we have to be doing more than just getting back to normal," said Bastian Betthauser, a researcher at the Center for Research on Social Inequalities at Sciences Po in Paris, who was a co-author on the review. He urged officials worldwide to provide intensive summer programs and tutoring initiatives that target poorer students who fell furthest behind. Thomas Kane, the faculty director of the Center for Education Policy Research at Harvard, who has studied school interruptions in the United States, reviewed the global analysis. Without immediate and aggressive intervention, he said, "learning loss will be the longest-lasting and most inequitable legacy of the pandemic."

[...] Because children have a finite capacity to absorb new material, Mr. Betthauser said, teachers cannot simply move faster or extend school hours, and traditional interventions like private tutoring rarely target the most disadvantaged groups. Without creative solutions, he said, the labor market ought to "brace for serious downstream effects." Children who were in school during the pandemic could lose about $70,000 in earnings over their lifetimes if the deficits aren't recovered, according to Eric Hanushek, an economist at the Hoover Institution at Stanford. In some states, pandemic-era students could ultimately earn almost 10 percent less than those who were educated just before the pandemic. The societal losses, he said, could amount to $28 trillion over the rest of the century.

Education

Why This Teacher Has Adopted an Open ChatGPT Policy (npr.org)

An anonymous reader quotes a report from NPR: Ethan Mollick has a message for the humans and the machines: can't we all just get along? After all, we are now officially in an A.I. world and we're going to have to share it, reasons the associate professor at the University of Pennsylvania's prestigious Wharton School. "This was a sudden change, right? There is a lot of good stuff that we are going to have to do differently, but I think we could solve the problems of how we teach people to write in a world with ChatGPT," Mollick told NPR. [...] This year, Mollick is not only allowing his students to use ChatGPT, they are required to. And he has formally adopted an A.I. policy into his syllabus for the first time.

He teaches classes in entrepreneurship and innovation, and said the early indications were the move was going great. "The truth is, I probably couldn't have stopped them even if I didn't require it," Mollick said. This week he ran a session where students were asked to come up with ideas for their class project. Almost everyone had ChatGPT running and were asking it to generate projects, and then they interrogated the bot's ideas with further prompts. "And the ideas so far are great, partially as a result of that set of interactions," Mollick said. He readily admits he alternates between enthusiasm and anxiety about how artificial intelligence can change assessments in the classroom, but he believes educators need to move with the times. "We taught people how to do math in a world with calculators," he said. Now the challenge is for educators to teach students how the world has changed again, and how they can adapt to that.

Mollick's new policy states that using A.I. is an "emerging skill"; that it can be wrong and students should check its results against other sources; and that they will be responsible for any errors or omissions provided by the tool. And, perhaps most importantly, students need to acknowledge when and how they have used it. "Failure to do so is in violation of academic honesty policies," the policy reads. [...] "I think everybody is cheating ... I mean, it's happening. So what I'm asking students to do is just be honest with me," he said. "Tell me what they use ChatGPT for, tell me what they used as prompts to get it to do what they want, and that's all I'm asking from them. We're in a world where this is happening, but now it's just going to be at an even grander scale." "I don't think human nature changes as a result of ChatGPT. I think capability did."

Education

Yale-Harvard Snub of US News Rankings Opens Way for More Exits (bloomberg.com)

First, Yale Law School. Now, Harvard Medical School. One by one, some of the nation's top graduate programs are quitting the great who's-up-who's-down scorecards of higher ed: US News & World Report's rankings. From a report: Harvard, No. 1 on the publication's latest medical-school list for research, joins a growing boycott of the most famous name in US college rankings. This week, the medical schools of Stanford University and the University of Pennsylvania announced they will no longer participate. Yale kicked off the movement in November, and was followed soon after by Harvard, Penn and Georgetown University law schools. The big question now is whether the movement will trickle down to undergraduate institutions. Critics of the rankings say their methodology is flawed and fails to represent the student experience, while supporters argue the lists are valuable guides for students. While this may put pressure on undergraduate colleges to reconsider their participation, those who study the rankings say the exodus might take some time.

Love 'em or hate 'em, they exert a powerful hold over institutions, students, parents and even recruiters. For some schools, sliding in the rankings can mean lost funding. Undergraduate schools have been tight-lipped about what happens next, although many admissions officers privately question the rankings' value. The criticism has been mounting for years. "I am convinced that the rankings game is a bit of mishegoss -- a slightly daft obsession that does harm when colleges, parents, or students take it too seriously," Princeton University President Christopher L. Eisgruber wrote in a 2021 op-ed in the Washington Post. In August, US Education Secretary Miguel Cardona called rankings "a joke."

Government

Member of Congress Reads AI-Generated Speech On House Floor (apnews.com)

U.S. Rep. Jake Auchincloss read a speech on the floor of the U.S. House that was generated by AI chatbot ChatGPT. "Auchincloss said he prompted the system in part to 'write 100 words to deliver on the floor of the House of Representatives' about the legislation," reports the Associated Press. "Auchincloss said he had to refine the prompt several times to produce the text he ultimately read. His staff said they believe it's the first time an AI-written speech was read in Congress." From the report: The bill, which Auchincloss is refiling, would establish a joint U.S.-Israel AI Center in the United States to serve as a hub for AI research and development in the public, private and education sectors. Auchincloss said part of the decision to read a ChatGPT-generated text was to help spur debate on AI and the challenges and opportunities created by it. He said he doesn't want to see a repeat of the advent of social media, which started small and ballooned faster than Congress could react. "I'm the youngest parent in the Democratic caucus, AI is going to be part of my life and it could be a general purpose technology for my children," said Auchincloss, 34.

The text generated from Auchincloss's prompt includes sentences like: "We must collaborate with international partners like the Israeli government to ensure that the United States maintains a leadership role in AI research and development and responsibly explores the many possibilities evolving technologies provide." "There were probably about a dozen of my colleagues on the floor. I bet none of them knew it was written by a computer," he said. Lawmakers and others shouldn't be reflexively hostile to the new technology, but also shouldn't wait too long before drafting policies or new laws to help regulate it, Auchincloss said. In particular, he argued that the country needs a "public counterweight" to the big tech firms that would help guarantee that smaller developers and universities have access to the same cloud computing, cutting edge algorithms and raw data as larger companies.

Earth

Africa Has Become 'Less Safe, Secure and Democratic' in Past Decade, Report Finds (theguardian.com)

Africa is less safe, secure and democratic than a decade ago, with insecurity holding back progress in health, education and economic opportunities, according to an assessment of the continent. From a report: The Ibrahim index of African governance, which examines how well governments have delivered on policies and services, including security, health, education, rights and democratic participation, said Covid had contributed to the stalling of progress over the past three years.

Mo Ibrahim, a Sudan-born businessman who launched the index in 2007, said economic opportunities and human development had improved "quite a lot" across Africa over the past 10 years. "But on the other hand, we see other forces pulling us back. The security and safety of our people is deteriorating," he said. Ibrahim said he was concerned the climate crisis would lead to more conflict over resources, as already seen in parts of Nigeria, Darfur and the Sahel, and worried about the impact of the war in Ukraine on development indicators across the continent.

AI

University of Texas Will Offer Large-Scale Online Master's Degree in AI (nytimes.com)

The University of Texas at Austin, one of the nation's leading computer science schools, said on Thursday that it was starting a large-scale, low-cost online Master of Science degree program in artificial intelligence. From a report: The first of its kind among elite computing schools, the new program could help swiftly expand the A.I. work force in the United States as tech giants like Microsoft rush to invest billions in the field.

The university announced the initiative amid a clamor over new technology powered by artificial intelligence that can generate humanlike art and texts. And while some of the technology industry's biggest companies are laying off workers after years of rapid growth, hiring in A.I. is expected to stay strong. University officials said they planned to train thousands of graduate students in sought-after skills like machine learning, for a tuition of about $10,000, starting in the spring of 2024. School officials said the cost was intended to make A.I. education more affordable. By contrast, Johns Hopkins University offers an online M.S. degree in artificial intelligence for more than $45,000.

AI

ChatGPT Passes MBA Exam Given By a Wharton Professor (nbcnews.com)

An anonymous reader quotes a report from NBC News: New research (PDF) conducted by a professor at the University of Pennsylvania's Wharton School found that the artificial intelligence-driven chatbot GPT-3 was able to pass the final exam for the school's Master of Business Administration (MBA) program. Professor Christian Terwiesch, who authored the research paper "Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course," said that the bot scored between a B- and B on the exam.

The bot's score, Terwiesch wrote, shows its "remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants." The bot did an "amazing job at basic operations management and process analysis questions including those that are based on case studies," Terwiesch wrote in the paper, which was published on Jan. 17. He also said the bot's explanations were "excellent." The bot is also "remarkably good at modifying its answers in response to human hints," he concluded.

While Chat GPT3's results were impressive, Terwiesch noted that Chat GPT3 "at times makes surprising mistakes in relatively simple calculations at the level of 6th grade Math." The present version of Chat GPT is "not capable of handling more advanced process analysis questions, even when they are based on fairly standard templates," Terwiesch added. "This includes process flows with multiple products and problems with stochastic effects such as demand variability." Still, Terwiesch said ChatGPT3's performance on the test has "important implications for business school education, including the need for exam policies, curriculum design focusing on collaboration between human and AI, opportunities to simulate real world decision making processes, the need to teach creative problem solving, improved teaching productivity, and more."

The latest findings come as educators become increasingly concerned that AI chatbots like ChatGPT could inspire cheating. Earlier this month, New York City's education department banned access to ChatGPT. While the education department cited "safety and accuracy" as reasons for the decision, the Washington Post notes how some teachers are "in a near-panic" about the technology enabling students to cheat on assignments.

Yesterday, for example, The Stanford Daily reported that a large number of Stanford students have already used ChatGPT on their final exams. Meanwhile, the anti-plagiarism service Turnitin is building a tool to detect text generated by AI.

Education

Anti-Plagiarism Service Turnitin Is Building a Tool To Detect ChatGPT-Written Essays 69

Turnitin, best known for its anti-plagiarism software used by tens of thousands of universities and schools around the world, is building a tool to detect text generated by AI. The Register reports: Turnitin has been quietly building the software for years ever since the release of GPT-3, Annie Chechitelli, chief product officer, told The Register. The rush to give educators the capability to identify text written by humans and computers has become more intense with the launch of its more powerful successor, ChatGPT. As AI continues to progress, universities and schools need to be able to protect academic integrity now more than ever. "Speed matters. We're hearing from teachers just give us something," Chechitelli said. Turnitin hopes to launch its software in the first half of this year. "It's going to be pretty basic detection at first, and then we'll throw out subsequent quick releases that will create a workflow that's more actionable for teachers." The plan is to make the prototype free for its existing customers as the company collects data and user feedback. "At the beginning, we really just want to help the industry and help educators get their legs under them and feel more confident. And to get as much usage as we can early on; that's important to make a successful tool. Later on, we'll determine how we're going to productize it," she said.

Turnitin's VP of AI, Eric Wang, said there are obvious patterns in AI writing that computers can detect. "Even though it feels human-like to us, [machines write using] a fundamentally different mechanism. It's picking the most probable word in the most probable location, and that's a very different way of constructing language [compared] to you and I," he told The Register. [...] ChatGPT, however, doesn't have this kind of flexibility and can only generate new words based on previous sentences, he explained. Turnitin's detector works by predicting what words AI is more likely to generate in a given text snippet. "It's very bland statistically. Humans don't tend to consistently use a high probability word in high probability places, but GPT-3 does so our detector really cues in on that," he said.

Wang said Turnitin's detector is based on the same architecture as GPT-3 and described it as a miniature version of the model. "We are in many ways I would [say] fighting fire with fire. There's a detector component attached to it instead of a generate component. So what it's doing is it's reading language in the exact same way GPT-3 reads language, but instead of spitting out more language, it gives us a prediction of whether we think this passage looks like [it's from] GPT-3." The company is still deciding how best to present its detector's results to teachers using the tool. "It's a difficult challenge. How do you tell an instructor in a small amount of space what they want to see?" Chechitelli said. They might want to see a percentage that shows how much of an essay seems to be AI-written, or they might want confidence levels showing whether the detector's prediction confidence is low, medium, or high to assess accuracy.

"I think there is a major shift in the way we create content and the way we work," Wang added. "Certainly that extends to the way we learn. We need to be thinking long term about how we teach. How do we learn in a world where this technology exists? I think there is no putting the genie back in the bottle. Any tool that gives visibility to the use of these technologies is going to be valuable because those are the foundational building blocks of trust and transparency."

AI

Scores of Stanford Students Used ChatGPT on Final Exams, Survey Suggests (stanforddaily.com) 108

The Stanford Daily: Stanford students and professors alike are grappling with the rise of ChatGPT, a chatbot powered by artificial intelligence, and the technology's implications for education. Some professors have already overhauled their courses in anticipation of how students might use the chatbot to complete assignments and exams. And according to an informal poll conducted by The Daily, a large number of students have already used ChatGPT on their final exams.

Whether the new technology will necessitate a revision of the Honor Code, the University's standards for academic integrity, remains to be seen: A University spokesperson confirmed that the Board of Judicial Affairs is aware of and monitoring these emerging tools. "Students are expected to complete coursework without unpermitted aid," wrote spokesperson Dee Mostofi. "In most courses, unpermitted aid includes AI tools like ChatGPT."

Software

The Lights Have Been On At a Massachusetts School For Over a Year Because No One Can Turn Them Off (nbcnews.com) 202

An anonymous reader quotes a report from NBC News: For nearly a year and a half, a Massachusetts high school has been lit up around the clock because the district can't turn off the roughly 7,000 lights in the sprawling building. The lighting system was installed at Minnechaug Regional High School when it was built over a decade ago and was intended to save money and energy. But ever since the software that runs it failed on Aug. 24, 2021, the lights in the suburban Springfield school have been on continuously, costing taxpayers a small fortune.

"We are very much aware this is costing taxpayers a significant amount of money," Aaron Osborne, the assistant superintendent of finance at the Hampden-Wilbraham Regional School District, told NBC News. "And we have been doing everything we can to get this problem solved." Osborne said it's difficult to say how much money it's costing because during the pandemic and in its aftermath, energy costs have fluctuated wildly. "I would say the net impact is in the thousands of dollars per month on average, but not in the tens of thousands," Osborne said. That, in part, is because the high school uses highly efficient fluorescent and LED bulbs, he said. And, when possible, teachers have manually removed bulbs from fixtures in classrooms while staffers have shut off breakers not connected to the main system to douse some of the exterior lights.

But there's hope on the horizon that the lights at Minnechaug will soon be dimmed. Paul Mustone, president of the Reflex Lighting Group, said the parts they need to replace the system at the school have finally arrived from the factory in China and they expect to do the installation over the February break. "And yes, there will be a remote override switch so this won't happen again," said Mustone, whose company has been in business for more than 40 years.

Education

Tech-Backed Code.org Bringing BBC Micro:bit To US K-5 Classrooms 21

theodp writes: On Tuesday, the Micro:bit Educational Foundation, a UK-based education non-profit "on a mission to inspire all children to achieve their best digital future," announced a partnership with US-based and tech giant-backed nonprofit Code.org to offer teachers computing resources to complement use of the handheld BBC micro:bit physical computing device as an extension to the Code.org CS Fundamentals curriculum, which is aimed at introducing Computer Science to children in Kindergarten-5th Grade.

"Physical computing is a great way to engage students in computer science, and I'm excited that Code.org is expanding its offerings in this maker education space," said Code.org CEO Hadi Partovi. "We're delighted to partner with micro:bit to provide physical computing extensions to our existing courses." Micro:bit Educational Foundation CEO Gareth Stockdale added, "Growing a diverse pipeline of tech talent who contribute to the creation of better technology in the world begins in the classroom. We are invested in excellence in computer science education for younger students and are excited by the size of the impact we can create together with Code.org to bring the benefits of physical computing to young learners."

Back in 2015, Microsoft -- a Founding Partner of both the Micro:bit Educational Foundation and Code.org -- partnered with the BBC to provide an estimated 1 million free BBC micro:bits to every 11 or 12 year old in the UK. "The chance to influence the lives of a million children does not come often," Microsoft Research wrote in a 2016 paper explaining the efforts to get the micro:bit into the hands of UK schoolchildren and make it part of the CS curriculum. The paper also cited Code.org and the UK's Computing at School (a Micro:bit Educational Foundation partner that was "born at Microsoft Research Cambridge") as "two significant successes at the coding level" of "scaling out an initiative to influence an entire country of students, or even globally."
