Men, Are You Loving Those AI Babes with Big Bits?
February 11, 2025
The dating scene has never been easy. It is apparently so difficult to find love these days that men are turning to digital love in the form of AI girlfriends. Vice News shares that “Most Men Would Marry Their AI Girlfriends If It Were Legal” and it is astounding the lengths men will go to for companionship.
EVA AI is a platform that allows people to connect with an AI partner. The platform recently surveyed 2,000 men and discovered that 8 in 10 would consider marrying their AI girlfriends if it were legal. It sounds like something out of the science fiction genre. The survey also found more startling news about men and AI girlfriends:
“Not only that, but 83% of men also believe they could form a deep emotional bond with an AI girlfriend. What’s even scarier is that a whopping 78% of men surveyed said they would consider creating a replica of their ex, and three-quarters would duplicate their current partner to create a “polished” version of them.”
Cale Jones, head of community growth at EVA AI, said that men find AI girlfriends to be safe and feel free to be their authentic selves. Jones continued that because AI girlfriends feel safe, men are willing to share their thoughts, emotions, and desires. Continuing on the safety train of thought, Jones explained that individuals are also exploring their sexual identities without fear.
AI girlfriends and boyfriends are their own brand of creepiness. If the AI copies an ex-girlfriend or boyfriend, a movie star, or even a random person, it creates many psychological and potentially dangerous issues:
“I think what raises the most concern is the ability to replicate another person. That feels exploitative and even dangerous in many ways. I mean, imagine some random dude created an AI girlfriend based on your sister, daughter, or mother…then, picture them beginning to feel possessive over this person, forming actual feelings for the individual but channeling them into the robot. If they were to run into the actual human version of their AI girlfriend in real life, well…who knows what could/would happen? Ever heard of a crime of passion?
Of course, this is just a hypothetical, but it’s the first thing that came to mind. Many people already have issues feeling like they have a right to someone else’s body. Think about the number of celebrities who are harassed by superfans. Is this going to feed that issue even further, making it a problem for everyday people, like classmates, friends, and colleagues?”
Let’s remember that the men surveyed by EVA AI are probably a small sample of “men.” So far.
Whitney Grace, February 10, 2025
A Case for Export Controls in the Wake of Deepseek Kerfuffle
February 11, 2025
Some were shocked by recent revelations of Deepseek’s AI capabilities, including investors. Others had been forewarned about the (allegedly) adept firm. Interesting how social media was used to create the shock and awe that online information services picked up and endlessly repeated. Way to amplify the adversary’s propaganda.
At any rate, this escalating AI arms race is now top-of-mind for many. Could strong export controls give the US an edge? After all, China’s own chip manufacturing is said to lag about five years behind ours. Anthropic CEO Dario Amodei believes they can, as he explains in his post, "On Deepseek and Export Controls."
The AI maestro begins with some groundwork. First, he describes certain ways AI development scales and shifts. He then looks at what makes Deepseek so special—and what does not. See the post for those details, but here is the key point for our discussion: AI developers everywhere require more and more hardware to progress. So far, Chinese and US companies have had access to similar reserves of both funds and chips. However, if we limit the number of chips flowing into China, Chinese firms will eventually hit a proverbial wall. Amodei compares hypothetical futures:
"The question is whether China will also be able to get millions of chips. If they can, we’ll live in a bipolar world, where both the US and China have powerful AI models that will cause extremely rapid advances in science and technology — what I’ve called ‘countries of geniuses in a datacenter‘. A bipolar world would not necessarily be balanced indefinitely. Even if the US and China were at parity in AI systems, it seems likely that China could direct more talent, capital, and focus to military applications of the technology. Combined with its large industrial base and military-strategic advantages, this could help China take a commanding lead on the global stage, not just for AI but for everything."
How ominous. And if we successfully implement and enforce export controls? He continues:
"If China can’t get millions of chips, we’ll (at least temporarily) live in a unipolar world, where only the US and its allies have these models. It’s unclear whether the unipolar world will last, but there’s at least the possibility that, because AI systems can eventually help make even smarter AI systems, a temporary lead could be parlayed into a durable advantage. Thus, in this world, the US and its allies might take a commanding and long-lasting lead on the global stage."
"Might," he says. There is no certainty here. Still, an advantage like this may be worthwhile if it keeps China’s military from outstripping ours. Hindering an Anthropic competitor is just a side effect of this advice, right? Sure, in a peaceful world, international "competition and collaboration make the world a better place." But that is not our reality at the moment.
Amodei hastens to note he thinks the Deepseek folks are fine researchers and curious innovators. It is just that bit about being beholden to their authoritarian government that may be the issue.
Cynthia Murrell, February 11, 2025
Google Goes Googley in Paris Over AI … Again
February 10, 2025
Google does some interesting things in Paris. The City of Light was the scene of a Googler’s demonstration of its AI, complete with hallucinations, about two years ago. On Monday, February 10, 2025, Google’s “leadership” Sundar Pichai allegedly leaked his speech or shared some memorable comments with journalists. These were reported in AAWSAT.com, an online information service, in the story “AI Is ‘Biggest Shift of Our Lifetimes’, Says Google Boss.”
I like the shift; it reminds me of the word “shifty.”
One of the passages catching my attention was this one, although I am not sure of the accuracy of the version in the cited article. The gist seems on point with Google’s posture during Code Red and its subsequent reorganization of the firm’s smart software unit. The context, however, does not seem to include the impact of Deepseek’s bargain basement approach to AI. Google is into big money for big AI. One wins big in a horse race bet by plopping big bucks on a favorite nag. Google is doing the big bet on AI: about $75 billion in capital expenditures in the next 10 months.
Here’s the quote:
Artificial intelligence (AI) is a "fundamental rewiring of technology" that will act as an "accelerant of human ingenuity." We’re still in the early days of the AI platform shift, and yet we know it will be the biggest of our lifetimes… With AI, we have the chance to democratize access (to a new technology) from the start, and to ensure that the digital divide doesn’t become an AI divide….
The statement exudes confidence. With billions riding on Mr. Pichai’s gambler’s instinct, stakeholders and employees not terminated for cost savings hope he is correct. Those already terminated may be rooting for a different horse.
Google’s head of smart software (sorry, Jeff Dean) allegedly offered this sentiment:
“Material science, mathematics, fusion, there is almost no area of science that won’t benefit from these AI tools," the Nobel chemistry laureate said.
Are categorical statements part of the mental equipment that makes a Nobel prize winner? He did include an “almost,” but I think the hope is that many technical disciplines will reap the fruits of smart software. Some smart software may just reap fruits from its users’ inputs.
A statement which I found more remarkable was:
Every generation worries that the new technology will change the lives of the next generation for the worse — and yet it’s almost always the opposite.
Another hedged categorical affirmative: “almost always.” The only issue is that, as Jacques Ellul asserted in The Technological Bluff, technology creates problems which invoke more technology to address the old problems while simultaneously creating new ones. I think Father Ellul was on the beam.
How about this for a concluding statement:
We must not let our own bias for the present get in the way of the future. We have a once-in-a-generation opportunity to improve lives at the scale of AI.
Scale. Isn’t that what Deepseek demonstrated may be a less logical approach to smart software? Paris has quite an impact on Google thought processes in my opinion. Did Google miss the Deepseek China foray? Did the company fail to interpret it in the context of wide adoption of AI? On the other hand, maybe if one does not talk about something, one can pretend that something does not exist. Like the Super Bowl ad with misinformation about cheese. Yes, cheese, again.
Stephen E Arnold, February 10, 2025
Microsoft, Deepseek, and OpenAI: An Interesting Mixture Like RDX?
February 10, 2025
We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.
I have successfully installed Deepseek and run some queries. The results seem okay, but most of the large language models we have installed have their strengths and weaknesses. What’s interesting about Deepseek is that it caused a bit of a financial squall when it was publicized during a Chinese dignitary’s visit to Colombia.
A short time after a high-flying video card company lost a few bucks, an expert advising the new US administration suggested “there’s substantial evidence that Deepseek used OpenAI’s models to train its own.” This story appeared on X.com via Fox. Another report said that Microsoft was investigating Deepseek. When I checked my newsfeed this morning (January 30, 2025), Slashdot pointed me to this story: “Microsoft Makes Deepseek’s R1 Model Available on Azure AI and GitHub.”
Did Microsoft do a speedy investigation, or is the inclusion of Deepseek in Azure AI and GitHub part of its investigation? Did loading up Deepseek kill everyone’s favorite version of Office on January 29, 2025? Probably not, but there is a lot of action in the AI space at Microsoft Town.
Let’s recap the stuff from the AI chemistry lab. First, we have the fascinating Sam AI-Man. With a deal of note because Oracle is in and Grok is out, OpenAI remains a partner with Microsoft. Second, Microsoft, fresh from bumper revenues, continues to embrace AI and demonstrate that a welcome mat is outside Satya Nadella’s door for AI outfits. Third, who stole what? AI companies have been viewed as information bandits by some outfits. Legal eagles cloud the sunny future of smart software.
What will these chemical elements combine to deliver? Let’s consider a few options.
- Like RDX, a go-to compound for some kinetic applications, the elements combust.
- The legal eagles effectively grind innovation to a halt with restrictions on Nvidia, limits on access to US open source software, and obstacles to the reinvigoration of the USA.
- Nothing. That’s right. The status quo chugs along with predictable ups and downs but nothing changes.
Net net: This will be an interesting techno-drama to watch in real time. On the other hand, I may wait until the Slice outfit does a documentary about the dust up, partnerships, and failed bro-love affairs.
Stephen E Arnold, February 10, 2025
What Does One Do When Innovation Falters? Do the Me-Too Bop
February 10, 2025
Another dinobaby commentary. No smart software required.
I found the TechRadar story “In Surprise Move Microsoft Announces Deepseek R1 Is Coming to CoPilot+ PCs – Here’s How to Get It” an excellent example of big tech innovation. The article states:
Microsoft has announced that, following the arrival of Deepseek R1 on Azure AI Foundry, you’ll soon be able to run an NPU-optimized version of Deepseek’s AI on your Copilot+ PC. This feature will roll out first to Qualcomm Snapdragon X machines, followed by Intel Core Ultra 200V laptops, and AMD AI chipsets.
Yep, me too, me too. The write up explains the ways in which one can use Deepseek, and I will leave taking that step to you. (On the other hand, navigate to Hugging Face and download it, or you could zip over to You.com and give it a try.)
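If you do take the Hugging Face route, a minimal sketch with the transformers library looks something like the code below. Treat the repo identifier (one of the distilled R1 checkpoints Deepseek published) and the generation settings as assumptions to double-check against the model card, not as a recipe blessed by Microsoft or Deepseek.

```python
# Minimal sketch: run a distilled Deepseek R1 checkpoint locally with Hugging Face
# transformers. The repo id below is an assumption to verify on the model card, and
# the half-precision / device_map settings assume the accelerate package is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed distilled checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision so it fits on modest hardware
    device_map="auto",           # let accelerate spread layers across GPU/CPU
)

# R1-style models expect a chat-formatted prompt; the tokenizer's template handles it.
messages = [{"role": "user", "content": "Summarize what an NPU does in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The NPU-optimized Copilot+ build Microsoft describes is a different artifact; this is just the plain Hugging Face path for the curious.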
The larger issue is not the speed with which Microsoft embraced the me too approach to innovation. For me, the decision illustrates the paucity of technical progress in one of the big technology giants. You know, Microsoft, the originator of Bob and the favorite software company of bad actors who advertise their malware on Telegram.
Several observations:
- It doesn’t matter how the Chinese start up nurtured by a venture capital firm got Deepseek to work. The Chinese outfit did it. Bang. The export controls and the myth of trillions of dollars to scale up disappeared. Poof.
- No US outfit — with or without US government support — was in the hockey rink when the Chinese team showed up and blasted a goal in the first few minutes of a global game. Buzz. 1 to zip. The question is, “Why not?” and “What’s happened since Microsoft triggered the crazy Code Red or whatever at the Google?” Answer: Burning money quickly.
- More pointedly, are the “innovations” in AI touted by Product Hunt and podcasters innovations? What if these are little more than wrappers with some snappy names? Answer: A reminder that technical training and some tactical kung fu can deliver a heck of a punch.
Net net: Deepseek was a tactical foray or probe. The data are in. Microsoft will install Chinese software in its global software empire. That’s interesting, and it underscores the problem of me too. Innovation takes more than raising prices and hiring a PR firm.
Stephen E Arnold, February 10, 2025
Deepseek: Details Surface Amid Soft Numbers
February 7, 2025
We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.
I read “Research exposes Deepseek’s AI Training Cost Is Not $6M, It’s a Staggering $1.3B.” The assertions in the write up are interesting and closer to the actual cost of the Deepseek open source smart software. Let’s take a look at the allegedly accurate and verifiable information. Then I want to point out two costs not included in the estimated cost of Deepseek.
The article explains that the actual outlay was closer to $1.3 billion. I am not sure if this estimate is on the money, but a higher cost is certainly understandable based on the money burning activities of outfits like Microsoft, OpenAI, Facebook / Meta, and the Google, among others.
The article says:
In its latest report, SemiAnalysis, an independent research company, has spotlighted Deepseek, a rising player in the AI landscape. The SemiAnalysis challenges some of the prevailing narratives surrounding Deepseek’s costs and compares them to competing technologies in the market. One of the most prominent claims in circulation is that Deepseek V3 incurs a training cost of around $6 million.
One important point is that building a smart software system and making it available for free incurs many costs. The consulting firm has narrowed its focus to training costs.
The write up reports:
The $6 million estimate primarily considers GPU pre-training expenses, neglecting the significant investments in research and development, infrastructure, and other essential costs accruing to the company. The report highlights that Deepseek’s total server capital expenditure (CapEx) amounts to an astonishing $1.3 billion. Much of this financial commitment is directed toward operating and maintaining its extensive GPU clusters, the backbone of its computational power.
But “astonishing.” Nope. Sam AI-Man tossed around numbers in the trillions. I am not sure we will ever know how much Amazon, Facebook, Google, and Microsoft — to name four outfits — have spent in the push to win the AI war, get a new monopoly, and control everything from baby cams to zebra protection in South Africa.
I do agree that the low ball number was low, but I think the pitch for this low ball was a tactic designed to see what a Chinese-backed AI product could do to the US financial markets.
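For what it is worth, headline “training cost” numbers of this type are usually nothing more than GPU-hours multiplied by an assumed rental rate, which is exactly why they leave out research salaries, failed runs, data wrangling, and the server CapEx SemiAnalysis highlights. Here is a back-of-the-envelope sketch; the figures are illustrative assumptions in the ballpark of the publicly reported ones, not audited data:

```python
# Back-of-the-envelope: how a headline "training cost" figure is typically derived.
# The numbers below are illustrative assumptions, not audited figures.
gpu_hours = 2.79e6           # assumed GPU-hours for the final pre-training run
rate_per_gpu_hour = 2.00     # assumed GPU rental price per hour, in USD

headline_cost = gpu_hours * rate_per_gpu_hour
print(f"Headline pre-training cost: ${headline_cost / 1e6:.1f}M")   # roughly $5.6M

# What the headline ignores: server CapEx, R&D staff, failed runs, data work, power.
server_capex = 1.3e9         # the SemiAnalysis server CapEx estimate cited above
print(f"Headline figure as a share of reported CapEx: {headline_cost / server_capex:.1%}")
```

A sub-one-percent slice of the reported CapEx is the whole trick of the low-ball pitch.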
There are some costs that neither the SemiAnalysis outfit nor the Interesting Engineering wordsmith considered.
First, if you take a look at the authors of the Deepseek arXiv papers you will see a lot of names. Most of these individuals are affiliated with Chinese universities. How were these costs handled? My hunch is that the costs were paid by the Chinese government and the authors of the paper did what was necessary to figure out how to come up with a “do more for less” system. The idea is that China, hampered by US export restrictions, is better at AI than the mythological Silicon Valley. Okay, that’s a good intelligence operation: Test destabilization with a reasonably believable free software gilded with AI sparklies. But the costs? Staff, overhead, and whatever perks go with being a wizard at a Chinese university have to be counted, multiplied by the time required to get the system to work mostly, and then included in the statement of accounts. These steps have not been taken, but a company named Complete Analytics should do the work.
Second, what was the cost of the social media campaign that made Deepseek more visible than the head referee of the Kansas City Chiefs and Philadelphia Eagles game? That cost has not been considered. Someone should grind through the posts, count the authors or their handles, and produce an estimate. As far as I know, there is no information about who is a paid promoter of Deepseek.
Third, how much did the electricity cost to get Deepseek to do its tricks? We must not forget the power at the universities, the research labs, and the laptops. Technology Review has some thoughts along this power line.
Finally, what’s the cost of the overhead? I am thinking about the planning time, the lunches, the meetings, and the back and forth needed to get Deepseek on track to coincide with the new president’s push to make China not so great again. We have nothing. We need a firm called SpeculativeAnalytics for this task, or maybe MasterCard can lend a hand?
Net net: The Deepseek operation worked. The recriminations, the allegations, and the explanations will begin. I am not sure they will have as much impact as this China smart, US dumb strategy. Plus, that SemiAnalysis name is a hoot.
Stephen E Arnold, February 7, 2025
China Smart, US Dumb: The Deepseek Foray into Destabilization of AI Investment
February 6, 2025
Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.
I have published a few blog posts about the Chinese information warfare directed at the US. Examples have included videos of a farm girl with primitive tools repairing complex machinery, the carpeting of ArXiv with papers about Deepseek’s AI innovations, and the stories in the South China Morning Post about assorted US technology issues.
Thanks You.com. Pretty good illustration.
Now the Deepseek foray is delivering tangible results. Numerous articles appeared on January 27, 2025, pegged to the impact of the Deepseek smart software on the US AI sector. A representative article is “China’s Deepseek Sparks AI Market Rout.”
The trusted real news outfit said:
Technology shares around the world slid on Monday as a surge in popularity of a Chinese discount artificial intelligence model shook investors’ faith in the AI sector’s voracious demand for high-tech chips. Startup Deepseek has rolled out a free assistant it says uses lower-cost chips and less data, seemingly challenging a widespread bet in financial markets that AI will drive demand along a supply chain from chipmakers to data centres.
Facebook ripped a page from the Google leadership team’s playbook. According to “Meta Scrambles After Chinese AI Equals Its Own, Upending Silicon Valley,” the Zuckerberg outfit assembled four “war rooms” to figure out how a Chinese open source AI could become such a big problem from out of the blue.
I find it difficult to believe that big US outfits were unaware of China’s interest in smart software. Furthermore, the Deepseek team made its bench strength quite clear by listing dozens upon dozens of AI experts who contributed to the Deepseek effort. But who in US AI land has time to cross correlate the names of the researchers in the arXiv essays and ask, “What are these folks doing to output cheaper AI models?”
Several observations are warranted:
- The effect of this foray has been to cause an immediate and direct concern about US AI firms’ ability to reduce costs. China allegedly has rolled out a good model at a lower price. Price competition comes in many forms. In this case, China can use less modern components to produce more modern AI. If you want to see how this works for basic equipment navigate to “Genius Girl Builds Amazing Hydroelectric Power Station For An Elderly Living Alone in the Mountains.” Deepseek is this information warfare tactic in the smart software space.
- The mechanism for the foray was open source. I have heard many times from some very smart people that open source is the future. Maybe that’s true. We now have an example of open source creating a credibility problem for established US big technology outfits who use open source to publicize how smart and good they are, prove they can do great work, and appear to be “community” minded. Deepseek just posted software that showed a small venture firm was able to do what US big technology has done at a fraction of the cost. Chinese business understands price and cost centric methods. This is the cost angle driven through the heart of scaling up solutions. Like giant US trucks, the approach is expensive and at some point will collapse under its own bloated framework.
- The foray has been broken into four parts: [a] The arXiv thrust, [b] the free and open source software thrust which begs the question, “What’s next from this venture firm?”, [c] the social media play with posts ballooning on BlueSky, Telegram, and Twitter, [d] the real journalism outfits like Bloomberg and Reuters yapping about AI innovation. The four-part thrust is effective.
China’s made the US approach to smart software look incredibly stupid. I don’t believe that a small group of hard workers at a venture firm cooked up the Deepseek method. The number of authors on the arXiv Deepseek papers makes that clear.
With one deft, non-kinetic, non-militaristic foray, China has amplified doubt about US AI methods. The action has chopped big bucks from outfits like Nvidia. Plus China has combined its playbook for lower costs and better prices with information warfare. I am not sure that Silicon Valley type outfits have a response to roll out quickly. The foray has returned useful intelligence to China.
Net net: More AI will be coming to destabilize the Silicon Valley way.
Stephen E Arnold, February 6, 2025
Google and Job Security? What a Hoot
February 4, 2025
We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.
Yesterday (January 30, 2025), one of the group mentioned that Google employees were circulating a YAP. I was not familiar with the word “yap”, so I asked, “What’s a yap?” The answer: It is yet another petition.
Here’s what I learned and then verified by a source no less pristine than NBC News. About 1,000 employees want Google to assure the workers that they have “job security.” Yo, Googlers, when lawyers at the Department of Justice and other Federal workers lose their jobs between sips of their really lousy DoJ coffee, there is not much job security. Imagine professionals with sinecures now forced to offer some version of reality on LinkedIn. Get real.
The “real” news outfit reported:
Google employees have begun a petition for “job security” as they expect more layoffs by the company. The petition calls on Google CEO Sundar Pichai to offer buyouts before conducting layoffs and to guarantee severance to employees that do get laid off. The petition comes after new CFO Anat Ashkenazi said one of her top priorities would be to drive more cost cutting as Google expands its spending on artificial intelligence infrastructure in 2025.
I remember when Googlers talked about the rigorous screening process required to get a job. This was the unicorn-like Google Labs Aptitude Test or GLAT. At one point, years ago, someone in the know gave me the “test” before a meeting. Here’s the first page of the document. (I think I received this from a Googler in 2004 or 2005.)
If you can’t read this, here’s question 6:
On your first day at Google, you discover that your cubicle mate wrote the textbook you used as a primary resource in your first year of graduate school. Do you:
a) Fawn obsequiously and ask if you can have an autograph
b) Sit perfectly still and use only soft keystrokes to avoid disturbing her concentration
c) Leave her daily offerings of granola and English toffee from the food bins
d) Quote your favorite formula from the text book and explain how it’s now your mantra
e) Show her how example 17b could have been solved with 34 fewer lines of code?
I have the full GLAT if you want to see it. Just write benkent2020 at yahoo dot com and we will find a way to provide the allegedly real document to you.
The good old days of Googley fun and self-confidence are, it seems, gone. As a proxy for the old Google, we have words like this from employees:
“We, the undersigned Google workers from offices across the US and Canada, are concerned about instability at Google that impacts our ability to do high quality, impactful work,” the petition says. “Ongoing rounds of layoffs make us feel insecure about our jobs. The company is clearly in a strong financial position, making the loss of so many valuable colleagues without explanation hurt even more.”
I would suggest that the petition won’t change Google’s RIF. The company faces several challenges. One of the major ones is the near impossibility of [a] paying to index and update the wonderful Google index, [b] spending money in order to beat the pants off the outfits which used Google’s transformer tricks, and [c] buying, hiring, or coercing the really big time AI wizards to join the online advertising company instead of starting an outfit to create a wrapper for Deepseek and getting money from whoever will offer it.
Sorry, petitions are unlikely to move a former McKinsey big time blue chip consultant. Get real, Googler. By the way, you will soon be a proud Xoogler. Enjoy that distinction.
Stephen E Arnold, February 4, 2025
Google AI Product Names: Worse Than the Cheese Fixation
February 4, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
If you are Googley, you intuitively and instantly know what these products are:
Gemini Advanced 2.0 Flash
Gemini Advanced 2.0 Flash Thinking Experimental
2.0 Flash Thinking Experimental with apps
2.0 Pro Experimental
1.5 Pro
1.5 Flash
If you don’t get it, you write articles like this one: “You Only Need to See This Screenshot Once to Realize Why Gemini Needs to Follow ChatGPT in Making Its AI Products Less Confusing.” Follow ChatGPT, from the outfit OpenAI, which is open source and a non-profit with a Chief Wizard who was fired and rehired more quickly than I can locate hallucinations in ChatGPT whatever. (With Google hallucinations, particularly in the cheese department, I know it is just a Sundar & Prabhakar joke.) With OpenAI, I am not quite sure of anything other than a successful (so far) end run around the masterful leader of X.com.
The write up says:
What we want is AI that just works, with simple naming conventions. If you look at the way Apple brands its products, it normally has up to three versions of a product with a simple name indicating the differences. It has two versions of its MacBook – the MacBook Air and MacBook Pro – and its latest iPhone – iPhone 16 and iPhone 16 Pro – that’s nice and simple.
Yeah, sure, Apple is the touchstone with indistinguishable iPhones, the M1, M2, M3, and M4 which are exactly understood as different by what percentage of the black turtleneck crowd?
Here’s a tip: These outfits are into marketing. Whether it is Apple designers influencing engineers or Google engineers influencing art history majors, neither company wants to do what courses in branding suggest; for example, consistency in naming and messaging and community engagement. I suppose confusion in social media and bafflement when trying to figure out what each black box large language model delivers other than acceptable high school essays and made up information is no big deal.
Word prediction is okay. Just a tip: Use the free services and read authoritative sources. Do some critical thinking. You may not be Googley, but you will be recognized as an individual who makes an honest effort to formulate useful outputs. Oh, you can label them experimental and flash to add some mystery to your old fashioned work, not “flash” work which is inconsistent, confusing, and sort of dumb in my opinion.
Stephen E Arnold, February 4, 2025
AI Smart, Humans Dumb When It Comes to Circuits
February 3, 2025
Anyone who knows much about machine learning knows we don’t really understand how AI comes to its conclusions. Nevertheless, computer scientists find algorithms do some things quite nicely. For example, ZME Science reports, "AI Designs Computer Chips We Can’t Understand—But They Work Really Well." A team from Princeton University and IIT Madras decided to flip the process of chip design. Traditionally, human engineers modify existing patterns to achieve desired results. The task is difficult and time-consuming. Instead, these researchers fed their AI the end requirements and told it to take it from there. They call this an "inverse design" method. The team says the resulting chips work great! They just don’t really know how or why. Writer Mihai Andrei explains:
"Whereas the previous method was bottom-up, the new approach is top-down. You start by thinking about what kind of properties you want and then figure out how you can do it. The researchers trained convolutional neural networks (CNNs) — a type of AI model — to understand the complex relationship between a circuit’s geometry and its electromagnetic behavior. These models can predict how a proposed design will perform, often operating on a completely different type of design than what we’re used to. … Perhaps the most exciting part is the new types of designs it came up with."
Yes, exciting. That is one word for it. Lead researcher Kaushik Sengupta notes:
"’We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance,’ says Sengupta. The designs were unintuitive and very different than those made by the human mind. Yet, they frequently offered significant improvements."
But at what cost? We may never know. It is bad enough that health care systems already use opaque algorithms, with all their flaws, to render life-and-death decisions. Just wait until these chips we cannot understand underpin those calculations. New world, new trade-offs for a world with dumb humans.
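For readers wondering what a top-down “inverse design” loop looks like in code, here is a toy sketch: train a small CNN surrogate that maps a circuit geometry (a pixel grid) to a predicted frequency response, then freeze the surrogate and optimize the geometry by gradient descent toward a target response. The data, dimensions, and loss are made-up assumptions for illustration; this shows the general shape of the technique, not the Princeton / IIT Madras pipeline.

```python
# Toy sketch of CNN-surrogate inverse design: learn geometry -> response, then
# optimize the geometry against a target response. All data here are placeholders;
# a real pipeline would train on electromagnetic-solver simulations.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Predicts a 16-point frequency response from a 32x32 geometry grid."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 16),
        )

    def forward(self, x):
        return self.net(x)

torch.manual_seed(0)
surrogate = Surrogate()

# Stage 1: fit the surrogate on (geometry, response) pairs. Random placeholders here.
geoms = torch.rand(256, 1, 32, 32)
responses = torch.rand(256, 16)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(geoms), responses)
    loss.backward()
    opt.step()

# Stage 2: freeze the surrogate, make the geometry the trainable parameter, and push
# it toward a target response (the "start from the properties you want" step).
for p in surrogate.parameters():
    p.requires_grad_(False)

target = torch.rand(1, 16)                        # the desired behavior
geom_logits = torch.zeros(1, 1, 32, 32, requires_grad=True)
design_opt = torch.optim.Adam([geom_logits], lr=0.05)
for _ in range(300):
    design_opt.zero_grad()
    geometry = torch.sigmoid(geom_logits)         # keep pixel values in [0, 1]
    loss = nn.functional.mse_loss(surrogate(geometry), target)
    loss.backward()
    design_opt.step()

print("Final design mismatch:", loss.item())
```

Scale that loop up to real electromagnetic simulations and far larger geometries, and you get layouts that hit the spec without any human-legible logic, which is precisely the trade-off the article frets about.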
Cynthia Murrell, February 3, 2025