Google Goes Nuclear For Data Centers

October 31, 2024

From the Future-Is-Just-Around-the-Corner Department:

Pollution is blamed on consumers, who are told to cut their dependency on plastic and drive less, while mega corporations and tech companies are among the biggest polluters in the world. Data centers are some of the biggest users of energy, and Google has decided to go nuclear to help power them, says Engadget: “Google Strikes A Deal With A Nuclear Startup To Power Its AI Data Centers.”

Google is teaming up with Kairos Power to build seven small nuclear reactors in the United States. The reactors will power Google’s AI drive and add 500 megawatts of capacity. The first reactor is expected to be built in 2030, with the rest planned for completion by 2035. The reactors are called small modular reactors, or SMRs for short.

Google’s deal with Kairos Power would be the first corporate deal to buy nuclear power from SMRs. The small reactors are built inside a factory instead of on site, so their construction cost is lower than that of a full power plant.
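The article’s figures invite a quick sanity check. A minimal sketch, assuming the 500 megawatts are split evenly across the seven reactors (the article does not say the units are identical):

```python
# Back-of-the-envelope check on the article's figures: 500 MW across seven
# SMRs. The even split is an assumption for illustration only.
TOTAL_MW = 500
REACTORS = 7

per_reactor = TOTAL_MW / REACTORS
print(f"~{per_reactor:.0f} MW per reactor")  # prints "~71 MW per reactor"
```

For scale, a conventional full-size nuclear unit runs on the order of 1,000 megawatts, so the entire seven-reactor fleet would deliver about half of one traditional plant.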

“Kairos will need the US Nuclear Regulatory Commission to approve design and construction permits for the plans. The startup has already received approval for a demonstration reactor in Tennessee, with an online date targeted for 2027. The company already builds test units (without nuclear-fuel components) at a development facility in Albuquerque, NM, where it assesses components, systems and its supply chain.

The companies didn’t announce the financial details of the arrangement. Google says the deal’s structure will help to keep costs down and get the energy online sooner.”

These tech companies say they are green, but now they are contributing more to global warming with their AI data centers and potential nuclear waste. At least nuclear energy is more powerful and does not contribute as much to pollution as coal or natural gas, except when the reactors melt down. Amazon is pursuing a similar nuclear deal too.

Has Google made the engineering shift from moon shots to environmental impact statements, nuclear waste disposal, document management, assorted personnel challenges? Sure, of course. Oh, and one trivial question: Is there a commercially available and certified miniature nuclear power plant? Russia may be short on cash. Perhaps someone in that country will sell a propulsion unit from those super reliable nuclear submarines? Google can just repurpose it in a suitable data center. Maybe one in Ashburn, Virginia?

Whitney Grace, October 31, 2024

An Emergent Behavior: The Big Tech DNA Proves It

October 14, 2024

Writer Mike Masnick at TechDirt makes quite the allegation: “Big Tech’s Promise Never to Block Access to Politically Embarrassing Content Apparently Only Applies to Democrats.” He contends:

“It probably will not shock you to find out that big tech’s promises to never again suppress embarrassing leaked content about a political figure came with a catch. Apparently, it only applies when that political figure is a Democrat. If it’s a Republican, then of course the content will be suppressed, and the GOP officials who demanded that big tech never ever again suppress such content will look the other way.”

The basis for Masnick’s charge of hypocrisy lies in a tale of two information leaks. Tech execs and members of Congress responded to each leak very differently. Recently, representatives from both Meta and Google pledged to Senator Tom Cotton at a Senate Intelligence Committee hearing to never again “suppress” news as they supposedly did in 2020 with the Hunter Biden laptop story. At the time, those platforms were leery of circulating that story until it could be confirmed.

Less than two weeks after that hearing, Journalist Ken Klippenstein published the Trump campaign’s internal vetting dossier on JD Vance, a document believed to have been hacked by Iran. That sounds like just the sort of newsworthy, if embarrassing, story that conservatives believe should never be suppressed, right? Not so fast—Trump mega-supporter Elon Musk immediately banned Klippenstein’s X account and blocked all links to Klippenstein’s Substack. Similarly, Meta blocked links to the dossier across its platforms. That goes further than the company ever did with the Biden laptop story, the post reminds us. Finally, Google now prohibits users from storing the dossier on Google Drive. See the article for more of Masnick’s reasoning. He concludes:

“Of course, the hypocrisy will stand, because the GOP, which has spent years pointing to the Hunter Biden laptop story as their shining proof of ‘big tech bias’ (even though it was nothing of the sort), will immediately, and without any hint of shame or acknowledgment, insist that of course the Vance dossier must be blocked and it’s ludicrous to think otherwise. And thus, we see the real takeaway from all that working of the refs over the years: embarrassing stuff about Republicans must be suppressed, because it’s doxing or hacking or foreign interference. However, embarrassing stuff about Democrats must be shared, because any attempt to block it is election interference.”

Interesting. But not surprising.

Cynthia Murrell, October 14, 2024

AI: New Atlas Sees AI Headed in a New Direction

October 11, 2024

I like the premise of “AI Begins Its Ominous Split Away from Human Thinking.” Neural nets trained by humans on human information are going in their own direction. Whom do we thank? The neural net researchers? The Googlers who conceived of “the transformer”? The online advertisers who have provided significant sums of money? The “invisible hand” tapping on a virtual keyboard? Maybe quantum entanglement? I don’t know.

I do know that New Atlas’ article states:

AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.

But isn’t that the point? The high school science club types beavering away in the smart software vineyards know the catchphrase:

Boldly go where no man has gone before!

The big outfits able to buy fancy chips and try to restart mothballed nuclear plants have taken “boldly go where no man has gone before” to heart. Get in the way of one of these captains of the starship US AI, and you will be terminated, harassed, or forced to quit. If you are not boldly going, you are just not going.

The article says ChatGPT 4 whatever is:

… the first LLM that’s really starting to create that strange, but super-effective AlphaGo-style ‘understanding’ of problem spaces. In the domains where it’s now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what’s a useful reasoning step and what’s not.

But, hey, it is pretty clear where AI is going from New Atlas’ perch:

OpenAI’s o1 model might not look like a quantum leap forward, sitting there in GPT’s drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.

But if the AI goes its own way, how can a human “conceive” where the software is going?

Doom and fear work for the evening news (or what passes for the evening news). I think there is a cottage industry of AI doomsters working diligently to stop some people from fooling around with smart software. That is not going to work. Plus, the magical “transformer” thing is a culmination of years of prior work. It is simply one more step in the more-than-50-year effort to process content.

This “stage” seems to have some utility, but more innovations will come. They have to. I am not sure how one stops people with money from hunting for people who can say, “I have the next big thing in AI.”

Sorry, New Atlas, I am not convinced. Plus, I don’t watch movies or buy into most AI wackiness.

Stephen E Arnold, October 11, 2024

Dolma: Another Large Language Model Corpus

October 9, 2024

The biggest complaint AI developers have is the lack of variety and diversity in the data used to train large language models (LLMs). According to the Cornell University computer science paper “Dolma: An Open Corpus Of Three Trillion Tokens For Language Model Pretraining Research,” open training corpora do exist.

The paper’s abstract details the difficulties of AI training very succinctly:

“Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations.”

Due to the lack of open training data, the paper’s team curated their own corpus, called Dolma. Dolma is a three-trillion-token English corpus. It was built from web content, public domain books, social media, encyclopedias, code, scientific papers, and more. The team thoroughly documented every information source so they would not repeat the problems of other training sets. These problems include stealing copyrighted material and private user data.

Dolma’s documentation also covers how it was built, its design principles, and content summaries. The team shares Dolma’s development through analyses and experimental test results. They are documenting everything thoroughly so that (hopefully) the corpus will not run into problems beyond the purely technical. Dolma’s toolkit is open source, and the team wants developers to use it. This is a great effort on behalf of Dolma’s creators! They support AI development and data curation, and they do it responsibly.
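The core idea, documenting every source so a corpus can be audited later, can be sketched in a few lines. This is a minimal illustration of per-document provenance records; the field names are assumptions for the sketch, not Dolma’s actual schema or toolkit:

```python
# A minimal sketch of per-document provenance records, in the spirit of
# documenting every information source. Field names are illustrative.
import json

def make_record(text, source, license_tag):
    """Bundle a document with the metadata needed to audit it later."""
    return {
        "text": text,
        "source": source,          # e.g. "public-domain-books"
        "license": license_tag,    # e.g. "public-domain", "MIT"
        "n_chars": len(text),
    }

records = [
    make_record("Call me Ishmael.", "public-domain-books", "public-domain"),
    make_record("def add(a, b): return a + b", "code", "MIT"),
]

# A per-source summary doubles as a documentation artifact for the corpus.
by_source = {}
for r in records:
    by_source[r["source"]] = by_source.get(r["source"], 0) + 1
print(json.dumps(by_source))
```

With records like these, questions such as “how much of the corpus is code?” or “which documents carry a restrictive license?” become simple lookups rather than forensic exercises.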

Give them a huge round of applause!

Cynthia Murrell, October 10, 2024

Windows Fruit Loop Code, Oops. Boot Loop Code.

October 8, 2024

Windows Update Produces Boot Loops. Again.

Some Windows 11 users are vigilant about staying on top of the latest updates. Recently, such users paid for their diligence with infinite reboots, freezes, and/or the dreaded blue screen of death. Digitaltrends warns, “Whatever You Do, Don’t Install the Windows 11 September Update.” Writer Judy Sanhz reports:

“The bug here can cause what’s known as a ‘boot loop.’ This is an issue that Windows versions have had for decades, where the PC will boot and restart endlessly with no way for users to interact, forcing a hard shutdown by holding the power button. Boot loops can be incredibly hard to diagnose and even more complicated to fix, so the fact that we know the latest Windows 11 update can trigger the problem already solves half the battle. The Automatic Repair tool is a built-in feature on your PC that automatically detects and fixes any issues that prevent your computer from booting correctly. However, recent Windows updates, including the September update, have introduced problems such as freezing the task manager and others in the Edge browser. If you’re experiencing these issues, our handy PC troubleshooting guide can help.”

So for many the update hobbled the means to fix it. Wonderful. It may be worthwhile to bookmark that troubleshooting guide. On multiple devices, if possible. Because this is not the first time Microsoft has unleashed this particular aggravation on its users. In fact, the last instance was just this past August. The company has since issued a rollback fix, but one wonders: Why ship a problematic update in the first place? Was it not tested? And is it just us, or does this sound eerily similar to July’s CrowdStrike outage?

(Does the fruit loop experience come with sour grapes?)

Cynthia Murrell, October 8, 2024

Hey, Live to Be a 100 like a Tech Bro

October 8, 2024

If you, gentle reader, are like me, you have taken heart at tales of people around the world living past 100. Well, get ready to tamp down some of that hope. An interview at The Conversation declares, “The Data on Extreme Human Ageing Is Rotten from the Inside Out.” Researcher Saul Justin Newman recently won an Ig Nobel Prize (not to be confused with a Nobel Prize) for his work on data about ageing. When asked about his work, Newman summarizes:

“In general, the claims about how long people are living mostly don’t stack up. I’ve tracked down 80% of the people aged over 110 in the world (the other 20% are from countries you can’t meaningfully analyze). Of those, almost none have a birth certificate. In the US there are over 500 of these people; seven have a birth certificate. Even worse, only about 10% have a death certificate. The epitome of this is blue zones, which are regions where people supposedly reach age 100 at a remarkable rate. For almost 20 years, they have been marketed to the public. They’re the subject of tons of scientific work, a popular Netflix documentary, tons of cookbooks about things like the Mediterranean diet, and so on. Okinawa in Japan is one of these zones. There was a Japanese government review in 2010, which found that 82% of the people aged over 100 in Japan turned out to be dead. The secret to living to 110 was, don’t register your death.”

That is one way to go, we suppose. We learn of other places Newman found bad ageing data. Europe’s “blue zones” of Sardinia in Italy and Ikaria in Greece, for example. There can be several reasons for erroneous data. For example, wars or other disasters that destroyed public records. Or clerical errors that set the wrong birth years in stone. But one of the biggest factors seems to be pension fraud. We learn:

“Regions where people most often reach 100-110 years old are the ones where there’s the most pressure to commit pension fraud, and they also have the worst records. For example, the best place to reach 105 in England is Tower Hamlets. It has more 105-year-olds than all of the rich places in England put together. It’s closely followed by downtown Manchester, Liverpool and Hull. Yet these places have the lowest frequency of 90-year-olds and are rated by the UK as the worst places to be an old person.”

That does seem fishy, especially since it is clear rich folks generally live longer than poor ones. (And that gap is growing, by the way.) So get those wills notarized, trusts set up, and farewell letters written sooner rather than later. We may not have as much time as we hoped.

Cynthia Murrell, October 8, 2024

DAIS: A New Attempt to Make AI Play Nicely with Humans

September 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

How about a decentralized artificial intelligence “association”? One has been set up by Michael Casey, the former chief content officer at Coindesk. (Coindesk reports about the bright, sunny world of crypto currency and related topics.) I learned about this society in — you guessed it — Coindesk’s online information service called Coindesk. The article “Decentralized AI Society Launched to Fight Tech Giants Who ‘Own the Regulators’” is interesting. I like the idea that “tech giants” own the regulators. This is an observation with which Apple and Google might not agree. Both “tech giants” have been facing some unfavorable regulatory decisions. If these regulators are “owned,” I think the “tech giants” need to exercise their leadership skills to make the annoying regulators go away. One resigned in the EU this week, but, as Shakespeare wrote of lawyers, let’s kill them all. So far the “tech giants” have been bumbling along, growing bigger as a result of feasting on data and amplifying allegedly monopolistic behaviors which just seem to pop up, rules or no rules.


Two experts look at what emerged from a Petri dish of technological goodies. Quite a surprise I assume. Thanks, MSFT Copilot. Good enough.

The write up reports:

Industry leaders have launched a non-profit organization called the Decentralized AI Society (DAIS), dedicated to tackling the probability of the monopolization of the artificial intelligence (AI) industry.

What is the DAIS outfit setting out to do? Here’s what Coindesk reports and this is a quote of the bullets from the write up:

Bringing capital to the decentralized AI world in what has already become an arms race for resources like graphical processing units (GPUs) and the data centers that compute together.

Shaping policy to craft AI regulations.

Education and promotion of decentralized AI.

Engineering to create new algorithms for learning models in a distributed way.

These are interesting targets. I want to point out that “decentralization” is the opposite of what the “tech giants” have already put in place; that is, concentration of money, talent, and infrastructure. Even old dogs like Oracle are now hopping on the centralized bandwagon. Even newcomers want to get as many cattle into the killing chute as possible before the glamor of AI begins to lose its sparkle.

Several observations:

  1. DAIS has some crypto roots. These may become positive or negative. Right now regulators are interested in crypto, as are other enforcement entities.
  2. One of the Arnold Laws of Online is that centralization, consolidation, and concentration are emergent behaviors for online products and services. Countering this “law” and its “emergent” functionality is going to take more than conferences, a Web site, and some “logical” ideas which any “rational” person would heartily endorse. But emergent is tough to stop based on my experience.
  3. Singapore has become a hot spot for certain financial and technical activities. The problem is that nation-states may not want to be inhibited in their AI ambitions. Some may find the notion of “education” a problem as well because curricula must conform to pre-defined frameworks. Distributed is not a pre-defined anything; it is the opposite of controlled and, therefore, likely to be a bit of a problem.

Net net: Interesting idea. But Amazon, Google, Facebook, Microsoft, and some other outfits may want to talk about “distributed” when what they really mean is: the technological notion is okay, but we want as much of the money as we can get.

Stephen E Arnold, September 20, 2024

Rapid Change: The Technological Meteor Causing Craziness

September 6, 2024


The mantra “Move fast and break things” creates opportunities for entrepreneurs and mental health professionals. “Eminent Scientist Richard Dawkins Reveals Fascinating Theory Behind West’s Mental Health Crisis” quotes Dr. Dawkins:

‘Certainly, the rate at which we are evolving genetically is miniscule compared to the rate at which we are evolving non-genetically, culturally,’ Dawkins told the hosts of the TRIGGERnometry podcast.  ‘And much of the mental illness that afflicts people may be because we are in a constantly changing unpredictable environment,’ the biologist added, ‘in a way that our ancestors were not.’


Thanks, Microsoft Copilot. Is that a Windows Phone doing the flame out thing?

The write up reports:

Dawkins expressed more direct concerns with other aspects of human technology’s impact on evolution: climate change and basic self-reliance in the face of a new Dark Age. ‘The internet is a huge change, it’s gigantic change,’ he noted. ‘We’ve become adapted to it with astonishing rapidity.’ ‘If we lost electricity, if we suddenly lost the technology we’re used to,’ Dawkins worried, humanity might not be able to even ‘begin’ to adapt in time, without great social upheaval and death… ‘Man-made extinction,’ he said, ‘it’s just as bad as the others. I think it’s tragic.’

There you go, death.

I know that brilliant people often speak carefully. Experts take time to develop their knowledge base and put words together that make complex ideas easy to understand.

From my redoubt in rural Kentucky, I have watched the panoply of events parading across my computer monitor. Among the notable moments were:

  1. Images from US cities showing homeless people slumped over either scrolling on their mobile phones or from the impact of certain compounds on their body
  2. Young people looting stores and noting similar items offered for sale on Craigslist.com-type sites
  3. Graphs of US academic performance illustrating the winners and losers of educational achievement tests
  4. The number of people driving around at times I associated with being in an office at “work” when I was younger
  5. Advertisements for prescription drugs with peculiar names and high-resolution images of people with smiles and contented lives but for the unnamed disease plaguing the otherwise cheerful folk.

What are the links between these unrelated situations and online access? I think I have a reasonably good idea. Why have experts, parents, and others required decades to figure out that flows of information are similar to sand-blasting systems? Provide electronic information to an organization, and it begins to decompose. The “bonds” which hold the people, processes, and products together are weakened. Then some break. Pump electronic information into younger people. They begin to come apart too. Give college students a tool to write their essays. Like lemmings, many take the AI solution and watch TikToks.

I am pleased that Dr. Dawkins has identified a problem. Now what’s the fix? The digital meteor has collided with human civilization. Can the dinosaurs be revivified?

Stephen E Arnold, September 6, 2024

Good Enough: The New Standard of Excellence

August 20, 2024


I read an interesting essay about software development. “[The] Biggest Productivity Killers in the Engineering Industry” presents three issues which add to the time and cost of a project. Let’s look at each of these factors and then one trivial downstream consequence of implementing these productivity touchpoints.

The three killers are:

  1. Working on a project until it meets one’s standards of “perfectionism.” Like “love” and “ethics”, perfectionism is often hard to define without a specific context. A designer might look at an interface and its colors and say, “It’s perfect.” The developer or, heaven forbid, the client looks and says, “That sucks.” Oh, oh.
  2. Stalling; that is, not jumping right into a project and making progress. I worked at an outfit which valued what it called “an immediate and direct response.” The idea is that action is better than reaction. Plus, it demonstrates that one is not fooling around.
  3. Context switching; that is, dealing with other priorities or interruptions.

I want to highlight one of these “killers”: the perfectionism versus “good enough” trade off. The essay contains some useful illustrations, including one for that trade off. The idea is pretty clear: chasing a “perfect” version of the software or some other task means more time is required, and if something takes too long, the value of chasing perfectionism hits a cost wall. Therefore, one should trade off time and value by turning in the work when it is good enough.
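The trade off the essay charts can be modeled in a few lines. This is a toy sketch under assumed curve shapes (value saturates, cost grows linearly); the constants are illustrative, not data from the essay:

```python
# A toy model of the perfectionism trade off: value from polishing follows
# diminishing returns, while cost grows linearly with hours spent. The
# constants below are illustrative assumptions.
import math

def value(t):
    """Value delivered after t hours of polishing (saturates near 100)."""
    return 100 * (1 - math.exp(-t / 10))

def cost(t):
    """Cost of t hours of polishing (linear)."""
    return 4 * t

# Net payoff peaks early, then erodes: past the peak, extra polish costs
# more than it returns. That peak is the "good enough" point.
best_t = max(range(0, 101), key=lambda t: value(t) - cost(t))
print(best_t)  # prints 9
```

Under these assumptions the optimum lands at nine hours; doubling the polishing time to twenty hours would leave the net payoff far lower, which is the essay’s cost-wall argument in miniature.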


The logic is understandable. I do have one concern not addressed in the essay. I believe my concern applies to the other two productivity killers, stalling and interruptions (my term for context switching).

What is this concern?

How about doors falling off aircraft, stranded astronauts, cybersecurity which fails to protect Social Security Numbers, and city governments that cannot determine whether compromised data were “good” or “corrupted”? We just know the data were compromised. There are other examples; for instance, the CrowdStrike misstep, which affected only a few million people. How did CrowdStrike happen? My hunch is that “good enough” thinking was involved, along with someone putting off making sure the internal controls were actually controlling, and interruptions that pulled the person responsible for software controls into a meeting instead of letting him or her finish and check the work.

The difficulty is composed of several capabilities; specifically:

  1. Does the person doing the job know how to make it work in a good enough manner? In my experience, the boss may not and simply wants the fix implemented now or the product shipped immediately.
  2. Does the company have a culture of excellence or is it similar to big outfits which cannot deliver live streaming content, allow reviewers to write about a product without threatening them, or provide tactics which kill people because no one on the team understands the concept of ethical behavior? Frankly, today I am not sure any commercial enterprise cares about much other than revenue.
  3. Does anyone in a commercial organization have responsibility to determine the practical costs of shipping a product or delivering a service that does not deliver reliable outputs? Reaction to failed good enough products and services is, in my opinion, the management method applied to downstream problems.

Net net: Good enough, like it or not, is the new gold standard. Or is that standard, like the Olympic medals, an amalgam? The “real” gold is a veneer; the “good” is a coating on “enough.”

Stephen E Arnold, August 20, 2024


Suddenly: Worrying about Content Preservation

August 19, 2024


Digital preservation may be becoming a hot topic for those who rarely think about finding today’s information tomorrow or even later today. Two write ups provide some hooks on which thoughts about finding information could be hung.


The young scholar faces some interesting knowledge hurdles. Traditional institutions are not much help. Thanks, MSFT Copilot. Is Outlook still crashing?

The first concerns PDFs. The essay and how to is “Classifying All of the PDFs on the Internet.” A happy quack to the individual who pursued this project, presented findings, and provided links to the data sets. Several items struck me as important in this project research report:

  1. Tracking down PDF files on the “open” Web is not something that can be done with a general Web search engine. The takeaway for me is that PDFs, like PowerPoint files, are either skipped or not crawled. The author had to resort to other, programmatic methods to find these file types. If an item cannot be “found,” it ceases to exist. How about that for an assertion, archivists?
  2. The distribution of document “source” across the author’s prediction classes splits out mathematics, engineering, science, and technology. Considering these separate categories as one makes clear that the PDF universe is about 25 percent of the content pool. Since technology is a big deal for innovators and money types, losing or not being able to access these data suggest a knowledge hurdle today and tomorrow in my opinion. An entity capturing these PDFs and making them available might have a knowledge advantage.
  3. Entities like national libraries and individualized efforts like the Internet Archive are not capturing the full sweep of PDFs based on my experience.
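One programmatic trick relevant to finding PDFs that general crawlers skip is to check file content rather than URLs or extensions: a PDF begins with the magic bytes `%PDF-`. This is a local sketch of that check, not the original project’s crawling pipeline:

```python
# Identify PDFs by content, not extension: a PDF file begins with the
# magic bytes b"%PDF-". The leading-whitespace tolerance is a small
# simplification for this sketch.

def looks_like_pdf(data: bytes) -> bool:
    """True if the byte stream begins like a PDF file."""
    return data.lstrip(b"\r\n \t").startswith(b"%PDF-")

print(looks_like_pdf(b"%PDF-1.7\n..."))   # True
print(looks_like_pdf(b"<html><body>"))    # False
```

A crawler can apply this test to the first kilobyte of any fetched resource, which is how mislabeled or extension-less PDFs on the open Web can still be counted.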

My reading of the essay made me recognize that access to content on the open Web is perceived to be easy and comprehensive. It is not. Your mileage may vary, of course, but this write up illustrates a large, multi-terabyte problem.

The second story about knowledge comes from the Epstein-enthralled institution’s magazine. This article is “The Race to Save Our Online Lives from a Digital Dark Age.” To make the urgency of the issue more compelling and better for the Google crawling and indexing system, this subtitle adds some lemon zest to the dish of doom:

We’re making more data than ever. What can—and should—we save for future generations? And will they be able to understand it?

The write up states:

For many archivists, alarm bells are ringing. Across the world, they are scraping up defunct websites or at-risk data collections to save as much of our digital lives as possible. Others are working on ways to store that data in formats that will last hundreds, perhaps even thousands, of years.

The article notes:

Human knowledge doesn’t always disappear with a dramatic flourish like GeoCities; sometimes it is erased gradually. You don’t know something’s gone until you go back to check it. One example of this is “link rot,” where hyperlinks on the web no longer direct you to the right target, leaving you with broken pages and dead ends. A Pew Research Center study from May 2024 found that 23% of web pages that were around in 2013 are no longer accessible.
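Measuring link rot the way the Pew study describes amounts to probing each URL and counting the dead ends. A minimal sketch of the classification step, with the network calls stubbed out and the status values assumed for illustration:

```python
# A minimal link-rot classifier: a link counts as "rotted" if it no longer
# resolves to a successful response. Network I/O is stubbed; the sample
# URLs and statuses below are illustrative assumptions.

def is_rotted(status_code) -> bool:
    """None means the request failed outright (DNS error, timeout)."""
    if status_code is None:
        return True
    return status_code >= 400  # 404, 410, 500 ... all count as dead ends

sample = {
    "https://example.com/alive": 200,
    "https://example.com/gone": 404,
    "https://example.com/vanished": None,
}
rot_rate = sum(is_rotted(s) for s in sample.values()) / len(sample)
print(f"{rot_rate:.0%} of sampled links are rotted")  # prints "67% ..."
```

Run over a 2013-era snapshot of URLs, this kind of tally is how a figure like Pew’s 23 percent is produced.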

Well, the MIT story has a fix:

One way to mitigate this problem is to transfer important data to the latest medium on a regular basis, before the programs required to read it are lost forever. At the Internet Archive and other libraries, the way information is stored is refreshed every few years. But for data that is not being actively looked after, it may be only a few years before the hardware required to access it is no longer available. Think about once ubiquitous storage mediums like Zip drives or CompactFlash.
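The “refresh” step the article describes, copying data onto current media before the old medium becomes unreadable, only preserves knowledge if nothing is silently corrupted in transit. A minimal sketch of a checksum-verified migration; the paths and the choice of SHA-256 are illustrative:

```python
# Sketch of a media "refresh": copy data to new storage and verify, via
# checksum, that the copy is bit-identical to the original.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def migrate(src: Path, dst: Path) -> bool:
    """Copy src to dst; return True only if the copy is bit-identical."""
    shutil.copy2(src, dst)
    return sha256_of(src) == sha256_of(dst)

# Demo on throwaway files standing in for old and new storage media.
with tempfile.TemporaryDirectory() as d:
    old = Path(d) / "old_medium.dat"
    new = Path(d) / "new_medium.dat"
    old.write_bytes(b"irreplaceable records")
    print(migrate(old, new))  # prints True
```

Storing the digests alongside the data also lets a future custodian detect bit rot on media that has merely been sitting on a shelf.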

To recap, one individual made clear that PDF content is a slippery fish. The other write up says the digital content itself across the open Web is a lot of slippery fish.

The fix remains elusive. The hurdles are money, copyright litigation, and technical constraints like storage and indexing resources.

Net net: If you want to preserve an item of information, print it out on some of the fancy Japanese archival paper. An outfit can say it archives, but in reality the information on the shelves is a tiny fraction of what’s “out there”.

Stephen E Arnold, August 19, 2024
