The AI Revealed: Look Inside That Kimono and Behind It. Eeew!
July 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The Guardian article “AI scientist Ray Kurzweil: ‘We Are Going to Expand Intelligence a Millionfold by 2045’” is quite interesting for what it does not do: question the projections of a Googler hired by Larry Page himself in 2012.
Putting toothpaste back in a tube is easier than dealing with the uneven consequences of new technology. What if rosy descriptions of the future are just marketing, a way of making darned sure the top one percent remain in the top one percent? Thanks, ChatGPT 4o. Good enough illustration.
First, a bit of context. Humans have been doing big tech for centuries. And where are we? We are post-Covid. We have homelessness. We have numerous armed conflicts. We have income inequality in the US and a few other countries I have visited. We have a handful of big tech companies in the AI game which want to be God, to use Mark Zuckerberg’s quaint observation. We have processed food. We have TikTok. We have systems which delight and entertain each day because of bad actors’ malware, wild and crazy education, and hybrid work with the fascinating phenomenon of coffee badging; that is, going to the office, getting a coffee, and then heading to the gym.
Second, the distance in earth years between 2024 and 2045 is 21 years. In the humanoid world, a 20-year-old today will be 41 when the prediction arrives. Is that a long time? Not for me. I am 80, and I hope I am out of here by then.
Third, let’s look at the assertions in the write up.
One of the notable statements in my opinion is this one:
I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
I like the quality of modesty and humblebrag. Googlers excel at both.
Another statement I circled is:
The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one.
I like the idea that the energy consumption required to deliver this merging will be cheap and plentiful. Googlers do not worry about a power failure, the collapse of a dam due to the ministrations of the US Army Corps of Engineers and time, or dealing with the environmental consequences of producing and moving energy from Point A to Point B. If Google doesn’t worry, I don’t.
Here’s a quote from the article allegedly made by Mr. Singularity aka Ray Kurzweil:
I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing.
I wonder: does the Google system that recommended a way to keep cheese from sliding off a pizza to an undesirable location embrace the Asilomar AI Principles? Is the dispute between the “go fast” AI crowd and the “go slow” group evidence that neither is aware of the Asilomar AI Principles? If they are aware, perhaps the Principles are balderdash? Just asking, of course.
Okay, I think these points are sufficient for going back to my statements about processed food, wars, big companies in the AI game wanting to be “god” et al.
The trajectory of technology in the computer age has been a mixed bag of benefits and liabilities. In the next 21 years, will this report card with some As, some Bs, lots of Cs, some Ds, and the inevitable Fs be different? My view is that the winners with human expertise and the know-how to make money will benefit. I think that the other humanoids may be in for a world of hurt. The homelessness, the weak reading, writing, and arithmetic skills, and the consumption of chemicals or other “stuff” that parks the brain will persist.
The future of hooking the human to the cloud is perfect for some. Others may not have the resources to connect, a bit like farmers in North Dakota with no affordable or reliable Internet access. (Maybe Starlink-type services will rescue those with cash?)
Several observations are warranted:
- Technological “progress” has been and will continue to be a mixed bag. Sorry, Mr. Singularity. The top one percent surf on change. The other 99 percent are not slam dunk winners.
- The infrastructure issue is simply ignored, which is convenient. I mean if a person grew up with house servants, it is difficult to imagine not having people do what you tell them to do. (Could people without access find delight in becoming house servants to the one percent who thrive in 2045?)
- The extreme contention created by the deconstruction of shared values, norms, and conventions for social behavior is something that cannot be reconstructed with a cloud and human mind meld. Once toothpaste is out of the tube, one has a mess. One does not put the paste back in the tube. One blasts it away with a zap of Goo Gone. I wonder if that’s another omitted consequence of this super duper intelligence behavior: Get rid of those who don’t get with the program?
Net net: Googlers are a bit predictable when they predict the future. Oh, where’s the reference to online advertising?
Stephen E Arnold, July 9, 2024
A Signal That Money People Are Really Worried about AI Payoffs
July 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
“AI’s $600B Question” is an interesting signal. The subtitle for the article is the pitch that sent my signal processor crazy: “The AI bubble is reaching a tipping point. Navigating what comes next will be essential.”
Executives on a thrill ride seem to be questioning the wisdom of hopping on the roller coaster. Thanks, MSFT Copilot. Good enough.
When money people output information that raises a question, something is happening. When the payoff is nailed, the financial types think about yachts, Bugattis, and getting quoted in the Financial Times. Doubts are raised because of these headline items: AI and $600 billion.
The write up says:
A huge amount of economic value is going to be created by AI. Company builders focused on delivering value to end users will be rewarded handsomely. We are living through what has the potential to be a generation-defining technology wave. Companies like Nvidia deserve enormous credit for the role they’ve played in enabling this transition, and are likely to play a critical role in the ecosystem for a long time to come. Speculative frenzies are part of technology, and so they are not something to be afraid of.
If I understand this money talk, a big time outfit is directly addressing fears that AI won’t generate enough cash to pay its bills and make the investors a bundle of money. If the AI frenzy was on the Money Train Express, why raise questions and provide information about the tough-to-control costs for making AI knock off the hallucination, the product recalls, the lawsuits, and the growing number of AI projects which just don’t work?
The fact of the article’s existence makes it clear to me that some folks are indeed worried. Does the write up reassure those with big bucks on the line? Does the write up encourage investors to pump more money into a new AI start up? Does the write up convert tests into long-term contracts with the big AI providers?
Nope, nope, and nope.
But here’s the unnerving part of the essay:
In reality, the road ahead is going to be a long one. It will have ups and downs. But almost certainly it will be worthwhile.
Translation: We will take your money and invest it. Just buckle up, butter cup. The ride on this roller coaster may end with the expensive cart hurtling from the track to the asphalt below. But don’t worry about us venture types. We will surf on churn and the flows of money. Others? Not so much.
Stephen E Arnold, July 8, 2024
Googzilla, Man Up, Please
July 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read a couple of “real” news stories about Google and its green earth / save the whales policies in the age of smart software. The first write up is okay and not too exciting for a critical thinker wearing dinoskin. “The Morning After: Google’s Greenhouse Gas Emissions Climbed Nearly 50 Percent in Five Years Due to AI” seems to be a PR-massaged write up. Consider this passage:
According to the report, Google said it expects its total greenhouse gas emissions to rise “before dropping toward our absolute emissions reduction target,” without explaining what would cause this drop.
Yep, no explanation. A PR win.
The BBC published “AI Drives 48% Increase in Google Emissions.” That write up states:
Google says about two thirds of its energy is derived from carbon-free sources.
Thanks, MSFT Copilot. Good enough.
Neither of these articles nor the others I scanned focused on one key fact about Google’s green talk and its driving snail darters to their fate. Google’s leadership team did not plan its energy strategy. In fact, my hunch is that no one paid any attention to how much energy Google’s AI activities were sucking down. Once the company shifted into Code Red or whatever consulting term craziness it used to label its frenetic response to the Microsoft OpenAI tie up, absolutely zero attention was directed toward the few big eyed tunas which might be taking their last dip.
Several observations:
- PR speak and green talk are like many assurances emitted by the Google. Talk is not action.
- The management processes at Google are disconnected from what happens when the wonky Code Red light flashes and the siren howls at midnight. Shouldn’t management be connected when the Tapanuli orangutan could soon be facing the Big Ape in the sky?
- The AI energy consumption is not a result of AI. The energy consumption is a result of Googlers who do what’s necessary to respond to smart software. Step on the gas. Yeah, go fast. Endanger the Amur leopard.
Net net: Hey, Google, stand up and say, “My leadership team is responsible for the energy we consume.” Don’t blame your up-in-flames “green” initiative on software you invented. How about less PR and more focus on engineering more efficient data center and cloud operations? I know PR talk is easier, but buckle up, butter cup.
Stephen E Arnold, July 8, 2024
AI: Hurtful and Unfair. Obviously, Yes
July 5, 2024
It will be years before AI is “smart” enough to entirely replace humans, but that future is closer than it appears. The problem with current AI systems is that they are stupid. They do not know how to do anything unless they are trained on huge datasets. These datasets contain the hard-won, copyrighted, trademarked, proprietary, etc. work of individuals. These people do not want their work used to train AI without their permission, much less to replace them. Futurism shares that even AI engineers are worried about their creations: “Video Shows OpenAI Admitting It’s ‘Deeply Unfair’ To ‘Build AI And Take Everyone’s Job Away.”
The interview with an AI software engineer’s admission of guilt originally appeared in The Atlantic, but his morality is quickly covered by his apathy. Brian Wu is the engineer in question. He feels guilty about making jobs obsolete, but he makes an observation that applies to all progress and new technology: things change, and that is inevitable:
“It won’t be all bad news, he suggests, because people will get to ‘think about what to do in a world where labor is obsolete.’
But as he goes on, Wu sounds more and more unconvinced by his own words, as if he’s already surrendered himself to the inevitability of this dystopian AI future.
‘I don’t know,’ he said. ‘Raise awareness, get governments to care, get other people to care.’ A long pause. ‘Yeah. Or join us and have one of the few remaining jobs. I don’t know. It’s rough.’”
Wu’s colleague Daniel Kokotajlo believes humans will invent an all-knowing artificial general intelligence (AGI). The AGI will create wealth; it will not be distributed evenly, but all humans will be rich. Kokotajlo then delves into the typical science-fiction story about a super AI becoming evil and turning against humanity. The AI engineers, however, are not concerned with the moral ambiguity of AI. They want to invent, continue building wealth, and are hellbent on doing it no matter the consequences. It is pure motivation but also narcissism and entitlement.
Whitney Grace, July 5, 2024
Smart Software and Knowledge Skills: Nothing to Worry About. Nothing.
July 5, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read an article in Bang Premier (an estimable online publication of which I had no prior knowledge). It is now a “fave of the week.” The story “University Researchers Reveal They Fooled Professors by Submitting AI Exam Answers” was one of those experimental results which caused me to chuckle. I like to keep track of sources of entertaining AI information.
A doctor and his surgical team used smart software to ace their medical training. Now a patient learns that the AI system does not have the information needed to perform life-saving surgery. Thanks, MSFT Copilot. Good enough.
The Bang Premier article reports:
Researchers at the University of Reading have revealed they successfully fooled their professors by submitting AI-generated exam answers. Their responses went totally undetected and outperformed those of real students, a new study has shown.
Is anyone surprised?
The write up noted:
Dr Peter Scarfe, an associate professor at Reading’s school of psychology and clinical language sciences, said about the AI exams study: “Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. “We won’t necessarily go back fully to handwritten exams, but the global education sector will need to evolve in the face of AI.”
But the knee slapper is this statement in the write up:
In the study’s endnotes, the authors suggested they might have used AI to prepare and write the research. They stated: “Would you consider it ‘cheating’? If you did consider it ‘cheating’ but we denied using GPT-4 (or any other AI), how would you attempt to prove we were lying?” A spokesperson for Reading confirmed to The Guardian the study was “definitely done by humans”.
The researchers may not have used AI to create their report, but is it possible that some of the researchers thought about this approach?
Generative AI software seems to have hit a plateau for technology, financial, or training issues. Perhaps those who are trying to design a smart system to identify bogus images, machine-produced text and synthetic data, and nifty videos which often look like “real” TikTok-type creations will catch up? But if the AI innovators continue to refine their systems, the “AI identifier” software is effectively in a game of cat-and-mouse. Reacting to smart software means that existing identifiers will be blind to the new systems’ outputs.
The goal is a noble one, but the advantage goes to the AI companies, particularly those who want to go fast and break things. Academics get some benefit. New studies will be needed to determine how much fakery goes undetected. Will a surgeon who used AI to get his or her degree be able to handle a tricky operation and get the post-op drugs right?
Sure. No worries. Some might not think this is a laughing matter. Hey, it’s AI. It is A-Okay.
Stephen E Arnold, July 5, 2024
Microsoft Recall Continues to Concern UK Regulators
July 4, 2024
A “feature” of the upcoming Microsoft Copilot+, dubbed Recall, looks like a giant, built-in security risk. Many devices already harbor software that can hunt through one’s files, photos, emails, and browsing history. Recall intrudes further by also taking and storing a screenshot every few seconds. Wait, what? That is what the British Information Commissioner’s Office (ICO) is asking. The BBC reports, “UK Watchdog Looking into Microsoft AI Taking Screenshots.”
Microsoft asserts users have control and that the data Recall snags is protected. But the company’s pretty words are not enough to convince the ICO. The agency is grilling Microsoft about the details and will presumably update us when it knows more. Meanwhile, journalist Imran Rahman-Jones asked experts about Recall’s ramifications. He writes:
“Jen Caltrider, who leads a privacy team at Mozilla, suggested the plans meant someone who knew your password could now access your history in more detail. ‘[This includes] law enforcement court orders, or even from Microsoft if they change their mind about keeping all this content local and not using it for targeted advertising or training their AIs down the line,’ she said. According to Microsoft, Recall will not moderate or remove information from screenshots which contain passwords or financial account information. ‘That data may be in snapshots that are stored on your device, especially when sites do not follow standard internet protocols like cloaking password entry,’ said Ms. Caltrider. ‘I wouldn’t want to use a computer running Recall to do anything I wouldn’t do in front of a busload of strangers. ‘That means no more logging into financial accounts, looking up sensitive health information, asking embarrassing questions, or even looking up information about a domestic violence shelter, reproductive health clinic, or immigration lawyer.’”
Calling Recall a privacy nightmare, AI and privacy adviser Dr Kris Shrishak notes just knowing one’s device is constantly taking screenshots will have a chilling effect on users. Microsoft appears to have “pulled” the service. But data and privacy expert Daniel Tozer made a couple more points: How will a company feel if a worker’s Copilot snaps a picture of their proprietary or confidential information? Will anyone whose likeness appears in video chat or a photo be asked for consent before the screenshot is taken? Our guess—not unless it is forced to.
Cynthia Murrell, July 4, 2024
Satire or Marketing: Let Smart Software Decide
July 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
What’s PhD level intelligence? In 1962, I had a required class in one of the -ologies. I vaguely remember that my classmates and I had to learn about pigeons, rats, and people who would make decisions that struck me as off the wall. The professor was named after a Scottish family from the Highlands. I do recall looking up the name and finding that it meant “crooked nose.” But the nose, as nice as it was, was nothing to the bed springs the good professor suspended from a second story window. I asked him, “What’s the purpose of the bed springs?” (None of the other students in the class cared, but I found the sight interesting.) His reply was, “I am using it as an antenna.” Okay, that is one example of PhD-level intelligence. I have encountered others, but I will not regale you with their somewhat idiosyncratic behaviors.
The illustration demonstrates the common sense approach to problem solving. Thanks, MSFT Copilot. Chugging away on Recall and security over the holiday. Yep, I bet you are.
Why am I thinking about a story about bed springs suspended from a second story window? I just read “ChatGPT Now Has PhD Level Intelligence, and the Poor Personal Choices to Prove It.” The write up states:
ChatGPT has become indispensable to plagiarists and spambots worldwide. Now, OpenAI is thrilled to introduce ChatGPT 5.0, the most advanced version of the popular virtual assistant to date. With groundbreaking improvements, GPT-5 is like having a doctor of philosophy right at your fingertips.
The write up (in a humorous vein I really hope) identifies some characteristics of the next big thing from Sam AI-Man, who hopes to be the Apple partner. The article suggests that ChatGPT 5 can cherish a saloon mug with this snappy phrase printed on the container:
Suck it up, liver.
No, I don’t want the mug, nor have I abused my liver as an undergraduate. I actually enjoyed spending time in the so-called university’s library.
The write up includes a short commentary about the ethical concerns or guardrails Sam AI-Man has erected on the Information Highway, populated with smart vehicles and presumably less smart humans who are confident the zeros and ones don’t get scrambled:
Numerous questions have arisen regarding the ethics and legality of training ChatGPT on copyrighted text data without permission. In this latest version, however, reliance on authors’ intellectual property has been dramatically reduced. While GPT-5 started training from a knowledge base of millions of texts, it got around to reading only Frankenstein, plus maybe half of a Donna Haraway book. It basically bluffed its way through prelims by talking about “embodiment” a lot.
I get the Mary Shelley reference. But Donna Haraway forced me to ask Claude Sonnet 3.5, “Please, explain Donna Haraway’s contribution to ethical discourse.” Here’s what I received from the smart system competing with Sam AI-Man’s presumably smarter system:
Here’s Donna Haraway’s contribution to ethical discourse in three sentences:
- Donna Haraway challenged traditional ideas about what it means to be human by introducing the concept of a “cyborg” – a blend of human and machine.
- She argued that the lines between nature and culture, human and animal, and physical and non-physical are not as clear as we think, which forces us to rethink our ethical responsibilities.
- Her work encourages us to consider how technology and science affect our understanding of ourselves and our relationships with other living things, pushing us to develop new ethical frameworks for our changing world.
Thank you, Claude Sonnet 3.5! I have stated that my IQ score pegs me in the “Dumb Cod” percentile. I think Ms. Haraway is into the Ray Kurzweil and Elon Musk concept space. I know I am looking forward to nanodevices able to keep me alive for many, many years. I want to poke fun at smart software, and I quite like to think about PhD level software.
To close, I want to quote the alleged statement of a very smart person who could not remember if OpenAI used YouTube-type content to train ChatGPT. (Hey, even crooked nose remembered that he suspended the bed springs to function like an antenna.) The CTO of OpenAI allegedly said:
“If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence… and then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at PhD intelligence…” — Open AI CTO Mira Murati, in an interview with Dartmouth Engineering
I wonder if a person without a PhD can recognize “PhD intelligence”? Sure. Why not? It’s marketing.
Stephen E Arnold, July 3, 2024
Another Open Source AI Voice Speaks: Yo, Meta!
July 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The open source versus closed source software debate ebbs and flows. Like the “go fast” with AI and “go slow” with AI camps, strong opinions suggest that big money and power are swirling like the storms on a weather app for Oklahoma in tornado season. The most recent EF5 is captured in “Zuckerberg Disses Closed-Source AI Competitors As Trying to Create God.” The US government seems to be concerned about open source smart software finding its way into the hands of those who are not fans of George Washington-type thinking.
Which AI philosophy will win the big pile of money? Team Blue representing the Zuck? Or, the rag tag proprietary wizards? Thanks, MSFT Copilot. You are into proprietary, aren’t you?
The “move fast and break things” personage of Mark Zuckerberg is into open source smart software. In the write up, he allegedly said in a YouTube bit:
“I don’t think that AI technology is a thing that should be kind of hoarded and … that one company gets to use it to build whatever central, single product that they’re building,” Zuckerberg said in a new YouTube interview with Kane Sutter (@Kallaway).
The write up includes this passage:
In the conversation, Zuckerberg said there needs to be a lot of different AIs that get created to reflect people’s different interests.
One interesting item in the article, in my opinion, is this:
“You want to unlock and … unleash as many people as possible trying out different things,” he continued. “I mean, that’s what culture is, right? It’s not like one group of people getting to dictate everything for people.”
But the killer Meta vision is captured in this passage:
Zuckerberg said there will be three different products ahead of convergence: display-less smart glasses, a heads-up type of display and full holographic displays. Eventually, he said that instead of neural interfaces connected to their brain, people might one day wear a wristband that picks up signals from the brain communicating with their hand. This would allow them to communicate with the neural interface by barely moving their hand. Over time, it could allow people to type, too. Zuckerberg cautioned that these types of inputs and AI experiences may not immediately replace smartphones, though. “I don’t think, in the history of technology, the new platform — it usually doesn’t completely make it that people stop using the old thing. It’s just that you use it less,” he said.
In short, the mobile phone is going down, not tomorrow, but definitely to the junk drawer.
Several observations which I know you are panting to read:
- Never underestimate making something small or re-inventing it as a different form factor. The Zuck might be “right.”
- The idea of “unleash” is interesting. What happens if employees at WhatsApp unleash themselves? How will the Zuck construct react? Like the Google? Something new like blue chip consulting firms replacing people with smart software? “Unleash” can be interpreted in different ways, but I am thinking of turning loose a pack of hyenas. The Zuck may be thinking about eager kindergartners. Who knows?
- The Zuck’s position is different from the government officials who are moving toward restrictions on “free and open” smart software. Those hallucinating large language models can be repurposed into smart weapons. Close enough for horseshoes with enough RDX may do the job.
Net net: The Zuck is an influential and very powerful information channel owner. “Unleash” what? Hungry predators or those innovating children? Perhaps neither. But as OpenAI seems to be closing, the Zuck AI is into opening. Ah, uncertainty is unfolding before my eyes in real time.
Stephen E Arnold, July 3, 2024
Scale Is All You Need for Superintelligence: Interesting Assertion
July 2, 2024
This essay is the work of a dumb dinobaby. No smart software required. This blog post includes information from Claude 3.5 Sonnet. The information from smart software is in blue.
At this time, smart software boils down to the “transformer.” I navigated to Claude 3.5 Sonnet and asked the smart software to explain “transformer.” Here’s what the system generated:
The Google Transformer innovation, introduced in 2017, revolutionized natural language processing by introducing a novel architecture that relies entirely on attention mechanisms. This approach allows the model to process input sequences in parallel, rather than sequentially, leading to significant improvements in translation quality and training efficiency. The Transformer’s key features include self-attention, which enables it to capture relationships between words regardless of their position in a sentence, and positional encoding, which helps the model understand word order without relying on recurrent neural networks
I then asked, “Are there other ways to achieve smart software or AI information functions?” Claude 3.5 Sonnet spit out this list:
- Machine Learning Algorithms
- Expert Systems
- Neural Networks
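For readers who want the quoted Claude answer made concrete, the scaled dot-product self-attention at the heart of the transformer can be sketched in a few lines of Python. This is a minimal illustration, not production code, and the function name and toy dimensions are my own; a real transformer derives Q, K, and V from learned projections of the input.

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention.
    X: (seq_len, d) matrix of token embeddings."""
    d = X.shape[1]
    # Real transformers use learned projections; here Q = K = V = X for brevity.
    Q, K, V = X, X, X
    scores = Q @ K.T / np.sqrt(d)                  # pairwise similarity, scaled
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    # Each output token is a weighted mix of all tokens, regardless of position.
    return weights @ V

X = np.random.rand(4, 8)   # four tokens, eight-dimensional embeddings
out = self_attention(X)
print(out.shape)           # (4, 8)
```

The point the Claude answer makes about parallelism is visible here: the whole sequence is processed in one matrix multiplication rather than token by token, which is why position information has to be injected separately via positional encoding.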
Options are good. But the buzz focuses on transformers, a Google “invention” allegedly a decade old (but some suggest its roots reach back into the mists of time). But let’s stick with the Google and a decade.
The future is on the horizon. Thanks, MSFT Copilot. Good enough and you spelled “future” correctly.
“Etched Is Making the Biggest Bet in AI.” That is an interesting statement. The company states what its chip is not:
By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either. But for transformers, Sohu is the fastest chip of all time.
What does the chip do? The company says:
With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.
The company again points out the downside of its “bet the farm” approach:
Today, every state-of-the-art AI model is a transformer: ChatGPT, Sora, Gemini, Stable Diffusion 3, and more. If transformers are replaced by SSMs, RWKV, or any new architecture, our chips will be useless.
Yep, useless.
What is Etched’s big concept? The company says:
Scale is all you need for superintelligence.
This means, in my dinobaby-impaired understanding, that big delivers really smart smart software. Skip the power, pipes, and pings. Just scale everything. The company agrees:
By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling.
Because existing chips are “hitting a wall,” a number of companies are in the smart software chip business. The write up mentions 12 of them, and I am not sure the list is complete.
Etched is different. The company asserts:
No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.
The company walks through the problems of existing chips and delivers it knock out punch:
But since Sohu only runs transformers, we only need to write software for transformers!
Reduced coding and an optimized chip: Superintelligence is in sight. Does the company want you to write a check? Nope. Here’s the wrap up for the essay:
What happens when real-time video, calls, agents, and search finally just work? Soon, you can find out. Please apply for early access to the Sohu Developer Cloud here. And if you’re excited about solving the compute crunch, we’d love to meet you. This is the most important problem of our time. Please apply for one of our open roles here.
What’s the timeline? I don’t know. What’s the cost of an Etched chip? I don’t know. What’s the infrastructure required? I don’t know. But superintelligence is almost here.
Stephen E Arnold, July 2, 2024
Is There a Problem with AI Detection Software?
July 1, 2024
Of course not.
But colleges and universities are struggling to contain AI-enabled cheating. Sadly, it seems the easiest solution is tragically flawed. Times Higher Education considers, “Is it Time to Turn Off AI Detectors?” The post shares a portion of the new book, “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson. The excerpt begins by looking at the problem:
“The University of Pennsylvania’s annual disciplinary report found a seven-fold (!) increase in cases of ‘unfair advantage over fellow students’, which included ‘using ChatGPT or Chegg’. But Quizlet reported that 73 per cent of students (of 1,000 students, aged 14 to 22 in June 2023) said that AI helped them ‘better understand material’. Watch almost any Grammarly ad (ubiquitous on TikTok) and ask first, if you think clicking on ‘get citation‘ or ‘paraphrase‘ is cheating. Second, do you think students might be confused?”
Probably. Some universities are not exactly clear on what is cheating and what is permitted usage of AI tools. At the same time, a recent study found 51 percent of students will keep using them even if they are banned. The boost to their GPAs is just too tempting. Schools’ urge to fight fire with fire is understandable, but detection tools are far from perfect. We learn:
“AI detectors are already having to revise claims. Turnitin initially claimed a 1 per cent false-positive rate but revised that to 4 per cent later in 2023. That was enough for many institutions, including Vanderbilt, Michigan State and others, to turn off Turnitin’s AI detection software, but not everyone followed their lead. Detectors vary considerably in their accuracy and rate of false positives. One study looked at 14 different detectors and found that five of the 14 were only 50 per cent accurate or worse, but four of them (CheckforAI, Winston AI, GPT-2 Output and Turnitin) missed only one of the 18 AI-written samples. Detectors are not all equal, but the best are better than faculty at identifying AI writing.”
But is that ability worth the false positives? One percent may seem small, but to those students it can mean an end to their careers before they even begin. For institutions that do not want to risk false accusations, the authors suggest several alternatives that seem to make a difference. They advise instructors to discuss the importance of academic integrity at the beginning of the course and again as the semester progresses. Demonstrating how well detection tools work can also have an impact. Literally quizzing students on the school’s AI policies, definitions, and consequences can minimize accidental offenses. Schools could also afford students some wiggle room: allow them to withdraw submissions and take the zero if they have second thoughts. Finally, the authors suggest schools normalize asking for help. If students get stuck, they should feel they can turn to a human instead of AI.
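The base-rate arithmetic behind that worry is easy to check. Here is a back-of-the-envelope sketch using the article's 1 percent and 4 percent figures; the submission count is my own assumption for illustration, not a number from the study:

```python
def falsely_accused(honest_submissions, false_positive_rate):
    """Expected number of honest students flagged as AI cheaters."""
    return honest_submissions * false_positive_rate

# Suppose 10,000 honest submissions pass through a detector in a term.
print(falsely_accused(10_000, 0.01))  # Turnitin's initial 1% claim: 100 students
print(falsely_accused(10_000, 0.04))  # the revised 4% figure: 400 students
```

Even at the optimistic rate, a large university would face a steady stream of wrongly accused students, which is presumably why Vanderbilt, Michigan State, and others switched the detector off.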
Cynthia Murrell, July 1, 2024