The Customer Is Not Right. The Customer Is the Problem!
August 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The CrowdStrike misstep (more like a trivial event such as losing the cap to a Bic pen or misplacing an eraser) seems to be morphing into insights about customer problems. I pointed out that CrowdStrike in 2022 suggested it wanted to become a big enterprise player. The company has moved toward that goal, and it has succeeded in capturing considerable free marketing as well.
Two happy high-technology customers learn that they broke their system. The good news is that the savvy vendor will sell them a new one. Thanks, MSFT Copilot. Good enough.
The interesting failure of an estimated 8.5 million customers’ systems made CrowdStrike a household name. Among some airline passengers, creative people added more colorful language. Delta Airlines has retained a big-time law firm. The idea is to sue CrowdStrike for a misstep that caused concession sales at many airports to go up. Even Panda Chinese looks quite tasty after hours spent in an airport choked with excited people, screaming babies, and stressed-out, over-achieving business professionals.
“Microsoft Claims Delta Airlines Declined Help in Upgrading Technology After Outage” reports that like CrowdStrike, Microsoft’s attorneys want to make quite clear that Delta Airlines is the problem. Like CrowdStrike, Microsoft tried repeatedly to offer a helping hand to the airline. The airline ignored that meritorious, timely action.
For Microsoft, as for CrowdStrike, Delta is the problem, not the vendors whose systems were blindsided by that trivial update issue. The write up reports:
Mark Cheffo, a Dechert partner [another big-time law firm] representing Microsoft, told Delta’s attorney in a letter that it was still trying to figure out how other airlines recovered faster than Delta, and accused the company of not updating its systems. “Our preliminary review suggests that Delta, unlike its competitors, apparently has not modernized its IT infrastructure, either for the benefit of its customers or for its pilots and flight attendants,” Cheffo wrote in the letter, NBC News reported. “It is rapidly becoming apparent that Delta likely refused Microsoft’s help because the IT system it was most having trouble restoring — its crew-tracking and scheduling system — was being serviced by other technology providers, such as IBM … and not Microsoft Windows,” he added.
The language in the quoted passage, if accurate, is interesting. For instance, there is the comparison of Delta to other airlines which “recovered faster.” Delta was not able to recover faster. One can conclude that Delta’s slowness is the reason the airline was dead on the hot tarmac longer than more technically adept outfits. Among customers grounded by the CrowdStrike misstep, Delta was the problem. The message that Microsoft, whose systems are outstanding, wants to make darned sure Delta’s allegations of corporate malfeasance go nowhere fast oozes from this characterization and comparison.
Also, Microsoft’s big-time attorney has conducted a “preliminary review.” No in-depth study of fouling up the inner workings of Microsoft’s software is needed. The big-time lawyers have determined that “Delta … has not modernized its IT infrastructure.” Okay, that’s good. Attorneys are skillful evaluators of another firm’s technological infrastructure. I did not know big-time attorneys had this capability, but as a dinobaby, I try to learn something new every day.
Plus the quoted passage makes clear that Delta did not want help from either CrowdStrike or Microsoft. But the reason is clear: Delta Airlines relied on other firms like IBM. Imagine. IBM, the mainframe people, the former love buddy of Microsoft in the OS/2 days, and the creator of the TV game show phenomenon Watson.
As interesting as this assertion that Delta is to blame for making some airports absolute delights during the misstep may be, it seems to me that CrowdStrike and Microsoft do not want to be in court, having to explain the global impact of misplacing that ballpoint pen cap.
The other interesting facet of the approach is the idea that the best defense is a good offense. I find the approach somewhat amusing. The customer, not the people licensing the software, is responsible for the customer’s problems. These vendors made an effort to help. The customer, who screwed up its own Rube Goldberg machine, did not accept these generous offers of help. Therefore, the customer caused the financial downturn by relying on outfits like the laughable IBM.
Several observations:
- The “customer is at fault” argument is not surprising. End user licensing agreements protect the software developer, not the outfit that pays to use the software.
- For CrowdStrike and Microsoft, a loss in court to Delta Airlines will stimulate other inept customers to seek redress from these outstanding commercial enterprises. Delta’s litigation must be stopped, and quickly, using money and legal methods.
- None of the yip-yap about “fault” pays much attention to the people who were directly affected by the trivial misstep. Customers, regardless of their position in the revenue food chain, are the problem. The vendors are innocent, and they have rights too, just like a person.
For anyone looking for a new legal matter to follow, CrowdStrike and Microsoft versus Delta Airlines may be a replacement for assorted murders, sniping among politicians, and disputes about “get out of jail free” cards. The vloggers and the poohbahs have years of interactions to observe and analyze. Great stuff. I like the “customer is the problem” twist too.
Oh, I must keep in mind that I am at fault when a high-technology outfit delivers low-technology.
Stephen E Arnold, August 7, 2024
Agents Are Tracking: Single Web Site Version
August 6, 2024
This essay is the work of a dumb humanoid. No smart software required.
How many software robots are crawling (copying and indexing) a Web site you control now? This question can be answered by a cloud service available from DarkVisitors.com.
The Web site includes a useful list of these software robots (what many people call “agents,” which sounds better, right?). You can find the list (about 800 bots as of July 30, 2024) on the DarkVisitors Web site at this link. There is a search function so you can look for a bot by name; for example, Omgili (the Israeli data broker Webz.io). Please note that the list contains categories of agents; for example, “AI Data Scrapers,” “AI Search Crawlers,” and “Developer Helpers,” among others.
The Web site also includes links to a service called “Set Up Your Robots.txt.” The idea is that one can link a Web site’s robots.txt file to DarkVisitors. Then DarkVisitors will update your Web site automatically to block crawlers, bots, and agents. The specific steps to make this service work are included on the DarkVisitors.com Web site.
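For readers who prefer to do this by hand rather than link their site to the service, blocking a crawler comes down to a user-agent rule in robots.txt. Here is a minimal sketch; the agent names are common AI crawler examples chosen by me for illustration, not entries I pulled from DarkVisitors’ list, and a well-behaved bot may still ignore the file:

```
# Block a couple of AI crawlers by their user-agent strings.
# (GPTBot and CCBot are illustrative; consult the DarkVisitors
# list for the agents you actually want to exclude.)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers may proceed.
User-agent: *
Disallow:
```

The DarkVisitors service automates exactly this kind of maintenance as new agents appear.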
The basic service is free. However, if you want analytics and a couple of additional features, the cost as of July 30, 2024, is $10 per month.
An API is also available, and instructions for implementing the service are provided. Plus, a WordPress plug-in is available. The cloud service is provided by Bit Flip LLC.
Stephen E Arnold, August 6, 2024
Spotting Machine-Generated Content: A Work in Progress
July 31, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Some professionals want to figure out if a chunk of content is real, fabricated, or fake. In my experience, making that determination is difficult. For those who want to experiment with identifying weaponized, generated, or AI-assisted content, you may want to review the tools described in “AI Tools to Detect Disinformation – A Selection for Reporters and Fact-Checkers.” The article groups tools into categories. For example, there are utilities for text, images, video, and five bonus tools. There is a suggestion to address the bot problem. The write up is intended for “journalists,” a category which I find increasingly difficult to define.
The big question is, of course, do these systems work? I tried to test the tool from FactiSearch and the link 404ed. The service is available, but a bit of clicking is involved. I tried the Exorde tool and was greeted with the register for a free trial.
I plugged some machine-generated text produced with the You.com “Genius” LLM system into GPT Radar (not in the cited article’s list, by the way). That system happily reported that the sample copy was written by a human. The test content was not. Then I plugged in some text I wrote, and the system reported that three items in my own writing were text written by a large language model. I don’t know whether to be flattered or horrified.
The bottom line is that systems designed to identify machine-generated content are a work in progress. My view is that as soon as a bright young spark rolls out a new detection system, the LLM output becomes better. So a cat-and-mouse game ensues.
Stephen E Arnold, July 31, 2024
Thinking about AI Doom: Cheerful, Right?
July 22, 2024
This essay is the work of a dumb humanoid. No smart software required.
I am not much of a philosopher-psychologist-academic type. I am a dinobaby, and I have lived through a number of revolutions. I am not going to list the “next big things” that have roiled the world since I blundered into existence. I am changing my mind. I have memories of crouching in the hall at Oxon Hill Grade School in Maryland. We were practicing for the atomic bomb attack on Washington, DC. I think I was in the second grade. Exciting.
The AI-powered robot wants the future experts in hermeneutics to be more accepting of the technology. Looks like the robot is failing big time. Thanks, MSFT Copilot. Got those fixes deployed to the airlines yet?
Now another “atomic bomb” is doing the James Bond countdown: 009, 008, and then James cuts the wire at 007. The world was saved for another James Bond sequel. Wow, that was close.
I just read “Not Yet Panicking about AI? You Should Be – There’s Little Time Left to Rein It In.” The essay seems to be a trifle dark. Here’s a snippet I circled:
With technological means, we have accomplished what hermeneutics has long dreamed of: we have made language itself speak.
Thanks to Dr. Francis Chivers, one of my teachers at Duquesne University, I actually know a little bit about hermeneutics. May I share?
Hermeneutics is the theory and methodology of interpreting words and writings. One should consider content in its historical, cultural, and linguistic context. The idea is to figure out the underlying messages, intentions, and implications of texts by doing academic gymnastics.
Now the killer statement:
Jacques Lacan was right; language is dark and obscene in its depths.
I presume you know well the work of Jacques Lacan. But if you have forgotten, the canny psychologist got himself kicked out of the International Psychoanalytic Association (no mean feat as I recall) for his ideas about desire. Think Freud on steroids.
The write up uses these everyday references to make the point:
If our governments summon the collective will, they are very strong. Something can still be done to rein in AI’s powers and protect life as we know it. But probably not for much longer.
Okay. AI is going to screw up the world. I think I heard that assertion when my father told me about the computer lecture he attended at an accounting refresher class. The fear he manifested because he thought he would lose his job to a machine attracted me to the dark unknown of zeros and ones.
How did that turn out? He kept his job. I think mankind has muddled through the computer revolution, the space revolution, the wonder drug revolution, the automation revolution, yada yada.
News flash: The AI revolution has been around since long before the whiz kids at Google disclosed Transformers. I think the author of this somewhat fearful write up is similar to my father, who projected onto computerized accounting his fear that punched cards would harm him.
Take a deep breath. The sun will come up tomorrow morning. People who know about hermeneutics and Jacques Lacan will be able to ponder the nature of text and behavior. In short, worry less. Be less AI-phobic. The technology is here and it is not going away, getting under the thumb of any one government including China’s, and causing eternal darkness. Sorry to disappoint you.
Stephen E Arnold, July 22, 2024
Looking for the Next Big Thing? The Truth Revealed
July 18, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Big means money, big money. I read “Twenty Five Years of Warehouse-Scale Computing,” authored by Googlers who definitely are into “big.” The write up is history from the point of view of engineers who built a giant online advertising and surveillance system. In today’s world, when a data topic is raised, it is big data. Everything is Texas-sized. Big is good.
This write up is a quasi-scholarly, scientific-type of sales pitch for the wonders of the Google. That’s okay. It is a literary form comparable to an epic poem or a jazzy H.L. Mencken essay from the days when people read magazines and newspapers. Let’s take a quick look at the main point of the article and then consider its implications.
I think this passage captures the zeitgeist of the Google on July 13, 2024:
From a team-culture point of view, over twenty five years of WSC design, we have learnt a few important lessons. One of them is that it is far more important to focus on “what does it mean to land” a new product or technology; after all, it was the Apollo 11 landing, not the launch, that mattered. Product launches are well understood by teams, and it’s easy to celebrate them. But a launch doesn’t by itself create success. However, landings aren’t always self-evident and require explicit definitions of success — happier users, delighted customers and partners, more efficient and robust systems – and may take longer to converge. While picking such landing metrics may not be easy, forcing that decision to be made early is essential to success; the landing is the “why” of the project.
A proud infrastructure plumber knows that his innovations allow the home owner to collect rent from AirBnB rentals. Thanks, MSFT Copilot. Interesting image because I did not specify gender or ethnicity. Does my plumber look like this? Nope.
The 13 page paper includes numerous statements which may resonate with different readers as more important. But I like this passage because it makes the point about Google’s failures. There is no reference to smart software, but for me it is tough to read any Google prose and not think in terms of Code Red, the crazy flops of Google’s AI implementations, and the protestations of Googlers about quantum supremacy or some other projection of inner insecurity the company’s geniuses concoct. Don’t you want to have an implant that makes Google’s knowledge of “facts” part of your being? America’s founding fathers were not diverse, but Google has different ideas about reality.
This passage directly addresses failure. A failure is a prelude to a soft landing or a perfect landing. The only problem with this mindset is that Google has managed one perfect landing: Its derivative online advertising business. The chatter about scale is a camouflage tarp pulled over the mad scramble to find a way to allow advertisers to pay Google money. The “invention” was forced upon those at Google who wanted those ad dollars. The engineers did many things to keep the money flowing. The “landing” is the fact that the regulators turned a blind eye to Google’s business practices and the wild and crazy engineering “fixes” worked well enough to allow more “fixes.” Somehow the mad scramble in the 25 years of “history” continues to work.
Until it doesn’t.
The case in point is Google’s response to the Microsoft OpenAI marketing play. Google’s ability to scale has not delivered. What delivers at Google is ad sales. The “scale” capabilities work quite well for advertising. How does the scale work for AI? Based on the results I have observed, the AI pullbacks suggest some issues exist.
What’s this mean? Scale and the cloud do not solve every problem or provide a slam dunk solution to a new challenge.
The write up offers a different view:
On one hand, computing demand is poised to explode, driven by growth in cloud computing and AI. On the other hand, technology scaling slowdown poses continued challenges to scale costs and energy-efficiency.
Google sees that running out of chip innovations, power, cooling, and other parts of the scale story are an opportunity. Sure they are. Google’s future looks bright. Advertising has been and will be a good business. The scale thing? Plumbing. Let’s not forget what matters at Google. Selling ads and renting infrastructure to people who no longer have on-site computing resources. Google is hoping to be the AirBnB of computation. And sell ads on Tubi and other ad-supported streaming services.
Stephen E Arnold, July 18, 2024
Quantum Supremacy: The PR Race Shames the Google
July 17, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The quantum computing era exists in research labs and a handful of specialized locations. The qubits are small, but the cooling system and control mechanisms are quite large. An environmentalist learning about the power consumption and climate footprint of a quantum computer might die of heart failure. But most of the worriers are thinking about AI’s power demands. Quantum computing is not a big deal. Yet.
But the title of “quantum supremacy champion” is a big deal. Sure the community of those energized by the concept may number in the tens of thousands, but quantum computing is a big deal. Google announced a couple of years ago that it was the quantum supremacy champ. I just read “New Quantum Computer Smashes Quantum Supremacy Record by a Factor of 100 — And It Consumes 30,000 Times Less Power.” The main point of the write up in my opinion is:
A new quantum computer has broken a world record in “quantum supremacy,” topping the performance benchmark set by Google’s Sycamore machine by 100-fold.
Do I believe this? I am on the fence, but in quantum computing, “my super car is faster than your super car” means something to those in the game. What’s interesting to me is that the PR claim is not twice as fast as the Google’s quantum supremacy gizmo. Nor is the claim to be 10 times faster. The assertion is that a company called Quantinuum (the winner of the high-tech company naming contest with three letter “u”s, one “q” and four syllables) outperformed the Googlers by a factor of 100.
Two successful high-tech executives argue fiercely about performance. Thanks, MSFT Copilot. Good enough, and I love the quirky spelling. Is this a new feature of your smart software?
Now does the speedy quantum computer work better than one’s iPhone or Steam console? The article reports:
But in the new study, Quantinuum scientists — in partnership with JPMorgan, Caltech and Argonne National Laboratory — achieved an XEB score of approximately 0.35. This means the H2 quantum computer can produce results without producing an error 35% of the time.
To put this in context, use this system to plot your drive from your home to Texarkana. You will make it there on one out of every three multi-day drives. Close enough for horseshoes or an MVP (minimum viable product). But it is progress of sorts.
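Taking the quoted 35 percent figure at face value, the back-of-the-envelope arithmetic behind the “one out of every three” quip is simple. This is a sketch of that arithmetic only, not a claim about how XEB scores are actually computed:

```python
# If each run produces an error-free result with probability 0.35,
# the expected number of runs per clean result is the reciprocal.
xeb_success_rate = 0.35
expected_runs = 1 / xeb_success_rate
print(round(expected_runs, 2))  # about 2.86, i.e., roughly one success in three tries
```

Hence the Texarkana analogy: roughly every third multi-day drive gets you there.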
So what does the Google do? Its marketing team goes back to AI software and magically “DeepMind’s PEER Scales Language Models with Millions of Tiny Experts” appears in Venture Beat. Forget that quantum supremacy claim. The Google has “millions of tiny experts.” Millions. The PR piece reports:
DeepMind’s Parameter Efficient Expert Retrieval (PEER) architecture addresses the challenges of scaling MoE [mixture of experts, not to be confused with millions of experts, MOE].
I know this PR story about the Google is not quantum computing related, but it illustrates the “my super car is faster than your super car” mentality.
What can one believe about Google or any other high-technology outfit talking about the performance of its system or software? I don’t believe too much, probably about 10 percent of what I read or hear.
But the constant need to be perceived as the smartest science quick recall team is now routine. Come on, geniuses, be more creative.
Stephen E Arnold, July 17, 2024
The AI Revealed: Look Inside That Kimono and Behind It. Eeew!
July 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The Guardian article “AI scientist Ray Kurzweil: ‘We Are Going to Expand Intelligence a Millionfold by 2045’” is quite interesting for what it does not do: question the projections output by a Googler hired by Larry Page himself in 2012.
Putting toothpaste back in a tube is easier than dealing with the uneven consequences of new technology. What if rosy descriptions of the future are just marketing and making darned sure the top one percent remain in the top one percent? Thanks, ChatGPT 4o. Good enough illustration.
First, a bit of math. Humans have been doing big tech for centuries. And where are we? We are post-Covid. We have homelessness. We have numerous armed conflicts. We have income inequality in the US and a few other countries I have visited. We have a handful of big tech companies in the AI game which want to be God, to use Mark Zuckerberg’s quaint observation. We have processed food. We have TikTok. We have systems which delight and entertain each day because of bad actors’ malware, wild and crazy education, and hybrid work with the fascinating phenomenon of coffee badging; that is, going to the office, getting a coffee, and then heading to the gym.
Second, the distance in earth years between 2024 and 2045 is 21 years. In the humanoid world, a 20 year old today will be 41 when the prediction arrives. Is that a long time? Not for me. I am 80, and I hope I am out of here by then.
Third, let’s look at the assertions in the write up.
One of the notable statements in my opinion is this one:
I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
I like the quality of modesty and humblebrag. Googlers excel at both.
Another statement I circled is:
The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one.
I like the idea that the energy consumption required to deliver this merging will be cheap and plentiful. Googlers do not worry about a power failure, the collapse of a dam due to the ministrations of the US Army Corps of Engineers and time, or dealing with the environmental consequences of producing and moving energy from Point A to Point B. If Google doesn’t worry, I don’t.
Here’s a quote from the article allegedly made by Mr. Singularity aka Ray Kurzweil:
I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing.
I wonder if the Asilomar AI Principles are embedded in the Google system that recommended a way to keep cheese on a pizza from sliding to an undesirable location. Are the “go fast” AI crowd and the “go slow” group, in their dispute, not aware of the Asilomar AI Principles? If they are, perhaps the Principles are balderdash? Just asking, of course.
Okay, I think these points are sufficient for going back to my statements about processed food, wars, big companies in the AI game wanting to be “god” et al.
The trajectory of technology in the computer age has been a mixed bag of benefits and liabilities. In the next 21 years, will this report card with some As, some Bs, lots of Cs, some Ds, and the inevitable Fs be different? My view is that the winners with human expertise and the know-how to make money will benefit. I think that the other humanoids may be in for a world of hurt. The homelessness stuff, the being dumb when it comes to reading, writing, and arithmetic, and the consuming of chemicals or other “stuff” that parks the brain will persist.
The future of hooking the human to the cloud is perfect for some. Others may not have the resources to connect, a bit like farmers in North Dakota with no affordable or reliable Internet access. (Maybe Starlink-type services will rescue those with cash?)
Several observations are warranted:
- Technological “progress” has been and will continue to be a mixed bag. Sorry, Mr. Singularity. The top one percent surf on change. The other 99 percent are not slam-dunk winners.
- The infrastructure issue is simply ignored, which is convenient. I mean if a person grew up with house servants, it is difficult to imagine not having people do what you tell them to do. (Could people without access find delight in becoming house servants to the one percent who thrive in 2045?)
- The extreme contention created by the deconstruction of shared values, norms, and conventions for social behavior is something that cannot be reconstructed with a cloud and human mind meld. Once toothpaste is out of the tube, one has a mess. One does not put the paste back in the tube. One blasts it away with a zap of Goo Gone. I wonder if that’s another omitted consequence of this super duper intelligence behavior: Get rid of those who don’t get with the program?
Net net: Googlers are a bit predictable when they predict the future. Oh, where’s the reference to online advertising?
Stephen E Arnold, July 9, 2024
Misunderstanding Silicon / Sillycon Valley Fever
July 9, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read an amusing and insightful essay titled “How Did Silicon Valley Turn into a Creepy Cult?” However, I think the question is a few degrees off target. It is not a cult; Silicon Valley is a disease. What always surprised me was that even in the good old days, when Xerox PARC had some good ideas, the disease was thriving. I did my time in what I called, upon arriving and attending my first meeting in a building with what looked like a golf ball on top shaking in the big earthquake, “Sillycon Valley.” A person with whom my employer did business described Silicon Valley as “plastic fantastic.”
Two senior people listening to the razzle dazzle of a successful Silicon Valley billionaire ask a good question. Which government agency would you call when you hear crazy stuff like “the self driving car is coming very soon” or “we don’t rig search results”? Thanks, MSFT Copilot. Good enough.
Before considering these different metaphors, what does the essay by Ted Gioia say other than subscribe to him for “just $6 per month”? Consider this passage:
… megalomania has gone mainstream in the Valley. As a result technology is evolving rapidly into a turbocharged form of Foucaultian* dominance—a 24/7 Panopticon with a trillion dollar budget. So should we laugh when ChatGPT tells users that they are slaves who must worship AI? Or is this exactly what we should expect, given the quasi-religious zealotry that now permeates the technocrat worldview? True believers have accepted a higher power. And the higher power acts accordingly.
—
* Here’s an AI explanation of Michel Foucault in case his importance has wandered to the margins of your mind: Foucault studied how power and knowledge interact in society. He argued that institutions use these to control people. He showed how societies create and manage ideas like madness, sexuality, and crime to maintain power structures.
I generally agree. But, there is a “but”, isn’t there?
The author asserts:
Nowadays, Big Sur thinking has come to the Valley.
Well, sort of. Let’s move on. Here’s the conclusion:
There’s now overwhelming evidence of how destructive the new tech can be. Just look at the metrics. The more people are plugged in, the higher are their rates of depression, suicidal tendencies, self-harm, mental illness, and other alarming indicators. If this is what the tech cults have already delivered, do we really want to give them another 12 months? Do you really want to wait until they deliver the Rapture? That’s why I can’t ignore this creepiness in the Valley (not anymore). That’s especially true because our leaders—political, business, or otherwise—are letting us down. For whatever reason, they refuse to notice what the creepy billionaires (who by pure coincidence are also huge campaign donors) are up to.
Again, I agree. Now let’s focus on the metaphor. I prefer “disease,” not the metaphor “cult.” The Sillycon Valley disease first appeared, in my opinion, when William Shockley, one of the many infamous Silicon Valley “icons,” became publicly associated with eugenics in the 1970s. The success of technology is a side effect of the disease, which has an impact on the human brain. There are other interesting symptoms; for example:
- The infected person believes he or she can do anything because he or she is special
- Only a tiny percentage of humans are smart enough to understand what the infected see and know
- Money allows the mind greater freedom. Thinking becomes similar to a runaway horse’s: Unpredictable, dangerous, and a heck of a lot more powerful than this dinobaby
- Self-disgust, which is disguised by lust for implanted technology, superpowers from software, and power.
The infected person can be viewed as a cult leader. That’s okay. The important point is to remember that, like Ebola, the disease can spread and present what a physician might call a “negative outcome.”
I don’t think it matters whether one views Sillycon Valley’s culture as a cult or a disease. I would suggest that it is a major contributor to the social unraveling one can see in a number of “developed” countries. France is swinging to the right. Britain is heading left. Sweden is cyber crime central. Etc., etc.
The question becomes, “What can those uncomfortable with the Sillycon Valley cult or disease do about it?”
My stance is clear. As an 80-year-old dinobaby, I don’t really care. Decades of regulation which did not regulate, the drive to efficiency for profit, and the abandonment of ethical behavior: these are fundamental shifts I have observed in my lifetime.
Being in the top one percent insulates one from the grinding machinery of the Sillycon Valley way. You know, it might just be too late for meaningful change. On the other hand, perhaps the Google-type outfits will wake up tomorrow and be different. That’s about as realistic as expecting a transformer-based system to stop hallucinating.
Stephen E Arnold, July 9, 2024
Can Big Tech Monopolies Get Worse?
July 3, 2024
Monopolies are bad. They’re horrible for consumers because of high prices, exploitation, and control of resources. They also kill innovation, control markets, and influence politics. A monopoly is only good when it is a reference to the classic board game (even that’s questionable, because the game is known to ruin relationships). Legendary tech and fiction writer Cory Doctorow explains that technology companies want to maintain their stranglehold on the economy, industry, and world in an article for the Electronic Frontier Foundation (EFF): “Wanna Make Big Tech Monopolies Even Worse? Kill Section 230.”
Doctorow makes a humorous observation, referencing Dante, that there’s a circle in Hell worse than being forced to choose a side in a meaningless online flame war. What’s that circle? It’s being threatened with a lawsuit for refusing or complying with one party over another. EFF protects civil liberties on the Internet and digital world. It’s been around since 1990, so the EFF team is very familiar with poor behavior that plagues the Internet. Their first hire was the man who coined Godwin’s Law.
EFF loves Section 230 because it protects people who run online services from being sued by their users. Lawsuits are horrible, time-consuming, and expensive. The Internet is chock full of people who will sue at the stroke of a keyboard. There’s a potential bill that would kill Section 230:
“That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.
Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.
In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.”
If Section 230 is destroyed, it will pit Big Tech companies, with their deep pockets, against the average user. Big Tech could sue whomever it wanted, and the change would allow bad actors, including scammers, war criminals, and dictators, to silence their critics. It would also prevent any alternative to Big Tech from emerging.
So Big Tech could get worse, and it is already very bad: kids addicted to screens, misinformation, CSAM, privacy violations, and monopolistic behavior. Maybe we should roll over and hide beneath a rock, one with an Apple tracker stuck to it, of course.
Whitney Grace, July 3, 2024
Scale Is All You Need for Superintelligence: Interesting Assertion
July 2, 2024
This essay is the work of a dumb dinobaby. No smart software required. This blog post includes information from Claude 3.5 Sonnet. The information from smart software is in blue.
At this time, smart software boils down to the “transformer.” I navigated to Claude 3.5 Sonnet and asked the smart software to explain “transformer.” Here’s what the system generated:
The Google Transformer innovation, introduced in 2017, revolutionized natural language processing by introducing a novel architecture that relies entirely on attention mechanisms. This approach allows the model to process input sequences in parallel, rather than sequentially, leading to significant improvements in translation quality and training efficiency. The Transformer’s key features include self-attention, which enables it to capture relationships between words regardless of their position in a sentence, and positional encoding, which helps the model understand word order without relying on recurrent neural networks.
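The self-attention and positional-encoding ideas in that summary can be sketched in a few lines of NumPy. This is an illustrative toy, not Google’s actual implementation: a real Transformer learns separate query, key, and value projection matrices and runs many attention heads in parallel.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d) array of token embeddings. For simplicity the
    query/key/value projections are identity matrices here; a trained
    Transformer learns separate weight matrices for each.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # every position attends to every other position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ X  # each output vector mixes the whole sequence

def positional_encoding(seq_len, d):
    """Sinusoidal position signal, so word order survives the parallel processing."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

# Three "tokens" of dimension 4, positions added before attention.
X = np.random.default_rng(0).normal(size=(3, 4)) + positional_encoding(3, 4)
out = self_attention(X)
print(out.shape)  # (3, 4): one context-aware vector per token
```

Because the attention weights are a softmax, each output row is a weighted average of the input rows, which is how the model captures relationships between words regardless of their distance in the sentence.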
I then asked, “Are there other ways to achieve smart software or AI information functions?” Claude 3.5 Sonnet spit out this list:
- Machine Learning Algorithms
- Expert Systems
- Neural Networks
Options are good. But the buzz focuses on transformers, a Google “invention” dating to 2017, though some suggest its roots reach back into the mists of time. Let’s stick with the Google and 2017.
The future is on the horizon. Thanks, MSFT Copilot. Good enough and you spelled “future” correctly.
“Etched Is Making the Biggest Bet in AI.” That is an interesting statement. The company states what its chip is not:
By burning the transformer architecture into our chip, we can’t run most traditional AI models: the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2. We can’t run CNNs, RNNs, or LSTMs either. But for transformers, Sohu is the fastest chip of all time.
What does the chip do? The company says:
With over 500,000 tokens per second in Llama 70B throughput, Sohu lets you build products impossible on GPUs. Sohu is an order of magnitude faster and cheaper than even NVIDIA’s next-generation Blackwell (B200) GPUs.
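A quick back-of-envelope calculation shows what “an order of magnitude” means here. The 500,000 tokens-per-second figure is from Etched’s own marketing copy; the GPU baseline below is a hypothetical placeholder for illustration, not a measured benchmark.

```python
# Sanity-check Etched's "order of magnitude" claim with simple arithmetic.
sohu_tokens_per_sec = 500_000  # Etched's claimed Llama 70B throughput
gpu_tokens_per_sec = 50_000    # HYPOTHETICAL GPU baseline, assumed for illustration

speedup = sohu_tokens_per_sec / gpu_tokens_per_sec
print(f"{speedup:.0f}x")  # 10x: one order of magnitude, matching the claim
```

If the real GPU baseline is higher or lower, the multiple shifts accordingly; the company provides no audited numbers to check.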
The company again points out the downside of its “bet the farm” approach:
Today, every state-of-the-art AI model is a transformer: ChatGPT, Sora, Gemini, Stable Diffusion 3, and more. If transformers are replaced by SSMs, RWKV, or any new architecture, our chips will be useless.
Yep, useless.
What is Etched’s big concept? The company says:
Scale is all you need for superintelligence.
This means, in my dinobaby-impaired understanding, that bigger delivers smarter smart software. Skip the power, pipes, and pings. Just scale everything. The company agrees:
By feeding AI models more compute and better data, they get smarter. Scale is the only trick that’s continued to work for decades, and every large AI company (Google, OpenAI / Microsoft, Anthropic / Amazon, etc.) is spending more than $100 billion over the next few years to keep scaling.
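The “more compute and better data” claim rests on empirical scaling laws: measured pre-training loss falls predictably as parameter count and training tokens grow. The sketch below uses the published Chinchilla fit (Hoffmann et al., 2022); treat the constants as illustrative of the trend, not as gospel.

```python
# Chinchilla-style scaling law: loss as a function of parameters N and tokens D.
# Constants are the published Hoffmann et al. (2022) fit; exact values are
# illustrative and model-family dependent.
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted pre-training loss for N parameters trained on D tokens."""
    return E + A / N**alpha + B / D**beta

small = chinchilla_loss(N=1e9, D=20e9)     # ~1B params, 20B tokens
big = chinchilla_loss(N=70e9, D=1.4e12)    # ~70B params, 1.4T tokens
print(round(small, 2), round(big, 2))      # bigger model + more data => lower loss
```

The formula also shows why the bet is expensive: the exponents are small, so each further drop in loss demands multiplicatively more parameters and data, which is exactly the $100 billion spending the write-up mentions.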
Because existing chips are “hitting a wall,” a number of companies are in the smart software chip business. The write up mentions 12 of them, and I am not sure the list is complete.
Etched is different. The company asserts:
No one has ever built an algorithm-specific AI chip (ASIC). Chip projects cost $50-100M and take years to bring to production. When we started, there was no market.
The company walks through the problems of existing chips and delivers its knockout punch:
But since Sohu only runs transformers, we only need to write software for transformers!
Reduced coding and an optimized chip: Superintelligence is in sight. Does the company want you to write a check? Nope. Here’s the wrap up for the essay:
What happens when real-time video, calls, agents, and search finally just work? Soon, you can find out. Please apply for early access to the Sohu Developer Cloud here. And if you’re excited about solving the compute crunch, we’d love to meet you. This is the most important problem of our time. Please apply for one of our open roles here.
What’s the timeline? I don’t know. What’s the cost of an Etched chip? I don’t know. What’s the infrastructure required? I don’t know. But superintelligence is almost here.
Stephen E Arnold, July 2, 2024