A Possible Tech Giant Wants Regulation? Mommy, When Do I Have to Be Home?
February 7, 2023
I am interested in people who want government to regulate their actions. I have a sneaking suspicion that the request is either uninformed, a sham, or an indirect statement like “We will abuse technology every possible way we can think of.”
You may have a different point of view. That’s super. But when I read articles like “ChatGPT Must Be Regulated and AI Can Be Used by Bad Actors, Warns OpenAI’s Chief Technology Officer” I find the statement disingenuous.
But, first, let’s look at a snippet of the write up:
Asked if it’s too early for regulators to get involved, Murati told _Time_, “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”
Ms. Mira Murati is the chief technology officer of the outfit doing business as OpenAI.
Who can disagree that “it is important for everyone to start getting involved”? I assume that means attorneys general; local, county, and state officials; and those in Washington, DC—but “everyone” is a much bigger group. It is, if I recall one of my logic professors correctly, a universal affirmative.
That’s impossible.
Thus, the statement is horse feathers or horse ridge or some other metaphor for a huge slice of Aldi baloney.
My take on what’s behind this statement is my opinion, so stop reading. I am pretty flexible now that I am a dinobaby and easily irritated:
- The plea to be regulated is a way to get a committee to promulgate rules. Once the rules are known, the person demanding them can figure out how to circumvent them without breaking the law, getting fined, or being killed by a self driving car which is situationally stupid. I can hear the response to the regulations: “Mommy, why do I have to come in by 10 pm? None of my friends has to be home so early.” Yep, mommy stuff.
- The statement makes the high-tech outfit seem so darned rational and accommodating. I hear, “We have invented something that can do evil. We need help managing what we have built and turned loose on a social construct obsessed with NFL football, TikToks, and quiet quitting.” The goal is what I call positive posturing.
- The company has advisors and lawyers who want rules. Lobbying can influence those rules to benefit the companies with a technology advantage. The goal is to cement that power position. How quickly did the US government move to action against AT&T, Microsoft, or Google? Yeah.
The write up says, “Regulations, please.” I hear, “Mommy, why do I have to come home at 10 pm?” The idea is to get beyond the barriers so lawyers can explain that the child was not involved in a drug bust.
Stephen E Arnold, February 7, 2023
Google: So Clever, So So Clever
February 6, 2023
I read a good summary of the US and state governments’ allegations about the behavior of the Google ad machine. I recommend “How Google Manipulated Digital Ad Prices and Hurt Publishers, Per DOJ.” The write up provides some useful insight into how the Google management environment has created a culture of being really cute, possibly really clever. The methods employed reminded me of a group of high school science club members pranking the hapless administration of a secondary school. Fun and being able to be smarter than everyone else is the name of the game.
Let me cite one example from the write up because it is short, to the point, and leaves little room for a statement like, “Senator, I did not know how the system’s components worked. I will provide the information you need. Again, I am sorry.” Does that line sound familiar? I left out the “Senator, thank you for the question” but otherwise the sentiment seems in tune with the song some companies sing to semi-aware elected officials.
Google Ads allegedly submitted two bid prices, unbeknownst to advertisers and publishers, effectively controlling the winning bids and the price floors. To entrench its market power even further, the suit argues Google started manipulating ad prices under a different method, which it dubbed “Bernanke.” Starting in 2013, according to the suit, Google Ads would submit bid prices to AdX above the amount advertisers had budgeted, in order to win high-value impressions for a group of publishers — the ones most likely to switch ad tech platforms. This insight could only be obtained by leveraging data in Google’s own publisher ad server. Once AdX cleared the bids, Google Ads would offset the losses by charging higher fees to other publishers less likely to switch ad tech providers. This scheme allegedly helped Google lock in key publishers away from other ad exchanges and ad buying tools, all while maintaining its profits at the expense of other smaller publishers.
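The alleged mechanics lend themselves to a toy model. The sketch below is my caricature of the pooling described in the complaint, not the DOJ’s math; the publisher names, the 15 percent take rate, and the clawback logic are all illustrative assumptions:

```python
# Toy caricature of the alleged "Bernanke" pooling scheme described above.
# All names, rates, and numbers are illustrative assumptions, not DOJ figures.

TAKE_RATE = 0.15  # the ad buying tool's normal cut of each budget


def run_auctions(impressions):
    """Each impression is (publisher, advertiser_budget, flight_risk).

    Normally the tool would bid budget minus its cut. In this caricature,
    it overbids on impressions from publishers likely to defect ("flight
    risk") and claws the subsidy back from publishers unlikely to leave.
    """
    pool = 0.0      # running subsidy spent on flight-risk publishers
    payouts = {}
    for publisher, budget, flight_risk in impressions:
        if flight_risk:
            bid = budget                      # overbid: pass the full budget through
            pool += budget * TAKE_RATE        # subsidy the tool must recoup
        else:
            bid = budget * (1 - 2 * TAKE_RATE)  # loyal publisher absorbs an extra fee
            pool -= budget * TAKE_RATE
        payouts[publisher] = payouts.get(publisher, 0.0) + bid
    return payouts, pool


payouts, pool = run_auctions([
    ("about-to-switch.com", 10.0, True),
    ("loyal-daily.com",     10.0, False),
])
# The flight-risk publisher is paid more for an identical budget, the loyal
# one less, and the tool's total take is unchanged once the pool nets out.
```

The point of the toy is the invariant: the buying tool’s margin survives intact while the money quietly moves from publishers who cannot leave to publishers who might.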
One of the best jobs I had in my life was my stint at the Courier-Journal & Louisville Times Co. That newspaper, like many others, has been unable to cope with the digital revolution. Outfits like Google and their clever methods may have hastened the financial precipice on which many publishers teeter.
My concern is that this particular method — just one of many, I assume — has been grinding out cash for the Google for about a decade. Now there is some action, but I think the far more important challenge Google faces will be active consumer uptake of newer options. These newcomers may prove to be familiar with Clever Avenue themselves.
I hope these AI-informed travelers take the road called Ethical Behavior Boulevard.
Stephen E Arnold, February 6, 2023
You Have Been Googled!
February 1, 2023
If the information in “Google Engineer Who Was Laid Off While on Mental Health Leave Says She Silently Mourned After Receiving Her Severance Email at 2 a.m.” is accurate, a new meaning for Google may have surfaced. The main point of the write up is that Google has been trimming some of its unwanted trees and shrubs (Populus Quisquilias). These are plants which have been cultivated with Google ideas, beliefs, and nutrients. But now: Root them out of the Google greenhouse, the spaces between cubes, and the grounds near lovely Shoreline Drive.
The article states:
Neil said she had an inclination that layoffs were coming but assumed she would be safe because she was already on leave. According to Neil, she “bled for Google.” She said she met and exceeded performance expectations, while also enjoying her job. Google felt like a safe and stable environment, where the risk of being laid off was very low, Neil said. She described the layoff process as “un-Googley” and done without care. “Now I’m left here having to find a job for the first time in years after being on mental health leave in quite possibly one of the most difficult hiring situations and housing markets,” Neil said. Google won’t allow Neil to go back to her office to drop off her work laptop and other devices, she said. The company has told her to meet security somewhere near the office, or ship the items in a box, she added.
I want to suggest that the new term for this management approach be called “googled.” To illustrate: In order to cut expenses, the firm googled 3,000 employees. Thus, the shift in meaning from “look up” to “look for your future elsewhere” represents a fresh approach for a cost conscious company.
It may be a badge of honor to have been “googled.” For the individual referenced in the write up, the pain and mental stress may take some time to go away. Does Google management know that Populus Quisquilias has feelings?
Stephen E Arnold, February 1, 2023
YouTube Biggie and a CryptoZoo
February 1, 2023
Whenever YouTuber Logan Paul makes headlines it is always a cringeworthy event. Paul does not disappoint, reports the BBC, because his latest idiotic incident involves cryptocurrency: “YouTube Star Logan Paul Apologizes for CryptoZoo Project Failure.” Using his celebrity, Paul encouraged his audience to purchase cryptocurrency items for a game project. He promised that the game was “really fun” and would make players money.
It has been more than a year since Paul made the announcement and the game still has not surfaced. It appears he has abandoned the project and people are hammering YouTube to investigate yet another Paul disaster.
The game was pitched as an autonomous ecosystem where players, called ZooKeepers, bought, sold, and traded cartoon eggs that would hatch into random animals. The images would be NFTs; ZooKeepers could then breed the images to spawn new species and earn $ZOO cryptocurrency. The game was supposed to debut in 2022, but nothing has surfaced. The project sold millions of dollars’ worth of crypto and NFTs, and its Discord server has 500 members.
An armchair cryptocurrency detective took on the case:
“Last month, cryptocurrency scam investigator Stephen Findeisen, known as Coffeezilla on YouTube, began a three-part video series about CryptoZoo, calling it a “scam”.
The American spoke to investors around the world who claimed to have spent hundreds, sometimes thousands of dollars on CryptoZoo items and were angry at Paul.
In his videos, that have had nearly 18 million views, Coffeezilla accused Paul of scamming investors and abandoning them after selling them “worthless” digital items.
On Thursday, Paul posted an angry rebuttal video admitting that he made a mistake hiring “conmen” and “felons” to run his project, but denied the failures were his fault.
He accused Mr. Findeisen of getting facts wrong and threatened to sue him.”
Paul has since deleted the video, apologized to Coffeezilla, and wrote on Discord that he would take responsibility and make a plan for CryptoZoo.
Perhaps Mr. Paul needs to make Brenda Lee’s “I’m Sorry” his theme song, learn to keep his mouth shut, and focus on boxing.
Whitney Grace, February 1, 2023
Does Google Have the Sony Betamax of Smart Software?
January 30, 2023
Does Google have the Sony Betamax of smart software? If you cannot answer this question as well as ChatGPT, you can take a look at “VHS or Beta? A Look Back at Betamax, and How Sony Lost the VCR Format War to VHS Recorders.” Boiling down the problem Sony faced, let me suggest that better did not win. Maybe adult content outfits tipped the scales? Maybe not. The best technology does not automatically dominate the market.
Flash forward from the anguish of Sony in the 1970s and the even more excruciating early 1980s to today. Facebook dismisses ChatGPT as not too sophisticated. I heard one of the big wizards at the Zuckbook say this to a Sillycon Alley journalist on a podcast called Big Technology. The name says it all. Big technology, just not great technology. That’s what the Zuckbooker suggested everyone’s favorite social media company has.
The Google has emitted a number of marketing statements about more than a dozen amazing smart software apps. These, please note, will be forthcoming. The most recent application of the Google’s advanced, protein folding, Go winning system is explained in words—presumably output by a real journalist—in “Google AI Can Create Music in Any Genre from a Text Description.” One can visualize the three exclamation points that a human wanted to insert in this headline. Amazing, right? That too is forthcoming. The article quickly asserts something that could have been crafted by one of Googzilla’s non-terminated executives:
MusicLM is surprisingly talented.
The GOOG has talent for sure.
What the Google does not have is the momentum of consumer craziness. Whether it is the buzz among some high school and college students that ChatGPT can write or help write term papers or the in-touch outfit Buzzfeed which will use ChatGPT to create listicles — the indomitable Alphabet is not in the information flow.
But the Google technology is better. That sounds like a statement I heard from a former wizard at RCA who was interviewing for a job at the blue chip consulting firm for which I worked when I was a wee lad. That fellow invented some type of disc storage system, maybe a laser-centric system. I don’t know. His statement still resonates with me today:
The Sony technology was better.
The flaw is the belief that the better technology will win. The inventors of the better technology, or the cobblers who glue together other innovations to create a “better” technology, never give up their convictions. How can a low resolution, cheaper recording solution win? The champions of Sony’s technology complained about fairness and pointed to the superior resolution of the recorded information.
I jotted down this morning (January 28, 2023) why Googzilla may be facing, like the Zuckbook, a Sony Betamax moment:
- The demonstrations of the excellence of the Google smart capabilities are esoteric and mean essentially zero outside of the Ivory Tower worlds of specialists. Yes, I am including the fans of Go and whatever other game DeepMind can win. Fan frenzy is not broad consumer uptake and excitement.
- Applications which ordinary Google search users can examine are essentially vaporware. The Dall-E and ChatGPT apps are coming fast and furious. I saw a database of AI apps based on these here-and-now systems, and I had no idea so many clever people were embracing the meh-approach of OpenAI. “Meh,” obviously may not square with what consumers perceive or experience. Remember those baffled professors or the Luddite lawyers who find smart software a bit of a threat.
- OpenAI has hit a marketing home run. Forget the Sillycon Alley journalists. Think about the buzz among the artists about their potential customers typing into a search box and getting an okay image. Take a look at Googzilla trying to comprehend the Betamax device.
Toss in the fact that Google’s ad business is going to have some opportunities to explain why owning the bar, the stuff on the shelves, the real estate, and the payment system is a net gain for humanity. Yeah, that will be a slam dunk, won’t it?
Perhaps more significantly, in the post-Covid crazy world in which those who use computers reside, the ChatGPT and OpenAI have caught a big wave. That wave can swamp some very sophisticated, cutting edge boats in a short time.
Here’s a question for you (the last one in this essay I promise): Can the Google swim?
Stephen E Arnold, January 30, 2023
MBAs Dig Up an Old Chestnut to Explain Tech Thinking
January 19, 2023
Elon Musk is not afraid to share, or better to say tweet, about his buyout and subsequent takeover of Twitter. He has detailed how he cleared the Twitter swamp of “woke employees” and the accompanying “woke mind virus.” Musk’s actions have been described as a prime example of poor leadership skills and lauded as a return to proper business. Musk and other rich business people see the current times as a war, but why? Vox’s article, “The 80-Year-Old Book That Explains Tech’s New Right-Wing Tilt,” explains by quoting writer Antonio García Martínez:
“…who’s very plugged into the world of right-leaning Silicon Valley founders. García Martínez describes a project that looks something like reverse class warfare: the revenge of the capitalist class against uppity woke managers at their companies. ‘What Elon is doing is a revolt by entrepreneurial capital against the professional-managerial class regime that otherwise everywhere dominates (including and especially large tech companies),’ García Martínez writes. On the face of it, this seems absurd: Why would billionaires who own entire companies need to “revolt” against anything, let alone their own employees?”
García Martínez says the answer is in James Burnham’s 1941 book: The Managerial Revolution: What Is Happening In The World. Burnham wrote that the world was in late-stage capitalism, so the capitalist bigwigs would soon lose their power to the “managerial class.” These are people who direct industry and complex state operations. Burnham predicted that Nazi Germany and Soviet Russia would inevitably be the winners. He was wrong.
Burnham might have been right about the unaccountable managerial class: experts in the economy, finance, and politics declare his account the best description of the present. Burnham said the managerial revolution would work this way:
“The managerial class’s growing strength stems from two elements of the modern economy: its technical complexity and its scope. Because the tasks needed to manage the construction of something like an automobile require very specific technical knowledge, the capitalist class — the factory’s owners, in this example — can’t do everything on their own. And because these tasks need to be done at scale given the sheer size of a car company’s consumer base, its owners need to employ others to manage the people doing the technical work.
As a result, the capitalists have unintentionally made themselves irrelevant: It is the managers who control the means of production. While managers may in theory still be employed by the capitalist class, and thus subject to their orders, this is an unsustainable state of affairs: Eventually, the people who actually control the means of production will seize power from those who have it in name only.
How would this happen? Mainly, through nationalization of major industry.”
Burnham believed the government would end up managing the economy, as in the USSR and Nazi Germany. The defeat of those authoritarian governments killed that idea, but Franklin Roosevelt laid the groundwork for an administrative state with the New Deal.
The article explains that the current woke cancel culture war can be viewed as a continuation of the New Deal. Managers now have more important roles than the CEOs who control the money, so the CEOs are trying to maintain their relevancy and power. It could also be viewed as a societal shift toward a different work style and ethic, with the old guard refusing to lay down their weapons.
Does Burnham’s book describe Musk’s hostile and/or needed Twitter takeover? Yes and no; it depends on the perspective. It does make one wonder if big tech management is following the green light from Thomas Hobbes’ 1651 Leviathan.
Whitney Grace, January 19, 2023
Google and Its PR Response to the ChatGPT Buzz Noise
January 16, 2023
A crazy wave is sweeping through the technology datasphere. ChatGPT, OpenAI, Microsoft, Silicon Valley pundits, and educators are shaken, not stirred, into the next big thing. But where is the Google in this cyclone bomb of smart software? The craze is not for a list of documents matching a user’s query. People like students and spammers are eager for tools that can write, talk, draw pictures, and code. Yes, code more good enough software, by golly.
In this torrential outpouring of commentary, free demonstrations, and venture capitalists’ excitement, I want to ask a simple question: Where’s the Google? Well, to the Google haters, the GOOG is in panic mode. RED ALERT, RED ALERT.
From my point of view, the Google has been busy busy being Google. Its head of search Prabhakar Raghavan is in the spotlight because some believe he has missed the Google bus taking him to the future of search. The idea that Googzilla has been napping before heading to Vegas to follow the NCAA basketball tournament is incorrect. Google has been busy, just not in a podcast, talking heads, pundit tweeting way.
Let’s look at two examples of what Google has been up to since ChatGPT became the next big thing in a rather dismal economic environment.
The first is the appearance of articles about the forward forward method for training smart software. You can read a reasonably good explanation in “What Is the “Forward-Forward” Algorithm, Geoffrey Hinton’s New AI Technique?” The idea is that some of the old-school approaches won’t work in today’s go-go world. Google, of course, has solved this problem. Did the forward forward thing catch the attention of the luminaries excited about ChatGPT? No. Why? Google is not too good at marketing in my opinion. ChatGPT is destined to be a mere footnote. Yep, a footnote, probably one in multiple Google papers like Understanding Diffusion Models: A Unified Perspective (August 2022). (Trust me. There are quite a few of these papers with comments about the flaws of ChatGPT-type software in the “closings” or “conclusions” to these Google papers.)
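For readers who have not chased down Hinton’s preprint, the forward-forward idea replaces backpropagation with two forward passes: each layer is trained locally so that its “goodness” (the sum of its squared activations) is high on real “positive” data and low on contrived “negative” data. Here is a minimal sketch under my own assumptions (toy data, illustrative hyperparameters), not Google’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)


def goodness(h):
    # Hinton's "goodness": sum of squared activations per example.
    return (h ** 2).sum(axis=1)


class FFLayer:
    """One layer trained locally, forward-forward style.

    No gradients flow between layers; each layer only sees its own
    activations on positive and negative batches.
    """

    def __init__(self, n_in, n_out, lr=0.05, theta=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.theta = lr, theta

    def _act(self, x):
        # Length-normalize the input so a layer cannot simply inherit
        # goodness from the layer below, then apply a ReLU layer.
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, xn @ self.W), xn

    def train_step(self, x_pos, x_neg):
        # Push goodness above theta on positive data, below it on negative.
        for x, positive in ((x_pos, True), (x_neg, False)):
            h, xn = self._act(x)
            p = 1.0 / (1.0 + np.exp(-(goodness(h) - self.theta)))
            dg = (1.0 - p) if positive else -p  # log-likelihood gradient wrt goodness
            self.W += self.lr * xn.T @ (dg[:, None] * 2.0 * h) / len(x)


# Toy data: "positive" examples share a pattern; "negative" ones are noise.
x_pos = np.tile([1.0, 0.0, 1.0, 0.0], (64, 1)) + rng.normal(0, 0.1, (64, 4))
x_neg = rng.normal(0.0, 1.0, (64, 4))

layer = FFLayer(4, 8)
for _ in range(300):
    layer.train_step(x_pos, x_neg)

# After training, goodness separates real structure from noise on average.
pos_g = goodness(layer._act(x_pos)[0]).mean()
neg_g = goodness(layer._act(x_neg)[0]).mean()
```

The design choice worth noticing is the local objective: because each layer has its own goodness target, layers can in principle be trained without storing activations for a backward pass, which is the property the forward forward papers emphasize.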
The second is the presentation of information about Google’s higher purpose. A good example is the recent interview with a Googler involved in the mysterious game-playing, protein-folding outfit called DeepMind. “DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution” does a good job of hitting the themes expressed in technical papers, YouTube video interviews, and breezy presentations at smart software conferences. This is a follow on to MIT researcher Lex Fridman’s talks with the Google engineer who thought the DeepMind system was a person and a two hour chat with the boss of DeepMind. The CEO video is at this link.
I want to highlight three points from this interview/article.
[A] Let’s look at this passage from the Time Magazine interview with the CEO of DeepMind:
Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.
I interpret this to mean that Google has better, faster, cheaper, and smarter NLP technology. Notice the idea of putting competitors in “mere footnotes.” This is an academic, semi-polite way to say, “Loser.”
[B] DeepMind allegedly became a unit of Alphabet Google for this reason:
Google was “very happy to accept” DeepMind’s ethical red lines “as part of the acquisition.”
Forget the money. Think “ethical red lines.” Okay, that’s an interesting concept for a company which is in the data hoovering business, sells advertising, has a bureaucratic approach I heard described as slime mold, and is being sued for assorted allegations of monopolistic behavior in several countries.
[C] The Time Magazine article includes this statement:
Back at DeepMind’s spiral staircase, an employee explains that the DNA sculpture is designed to rotate, but today the motor is broken. Closer inspection shows some of the rungs of the helix are askew.
Interesting choice of words: “The motor is broken” and “askew.” Is this irony or just the way it is when engineering has to be good enough and advertising powers the buzzing nervous system of the company?
From my point of view, Google has been responding to ChatGPT with academic reminders that the online advertising outfit has a better mousetrap. My thought is that Google knew ChatGPT would be a big deal. That realization sparked the attempt by Google to answer questions with cards and weird little factoids related to the user’s query. The real beef or “wood behind” the program is the catchy forward forward campaign. How is that working out? I don’t have a Google T shirt that spells out Forward Forward. Have you seen one? My research suggests that Google wants to corner the market on low cost training data. Think Snorkel. Google pushes synthetic data because it is not real and, therefore, cannot be dragged into court over improper use of Web-accessible content. Google, I believe, wants to become the Trader Joe’s of off-the-shelf training data and ready-to-run smart software models. The idea has been implemented to some degree at Amazon’s AWS as I recall.
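The Snorkel reference may be unfamiliar. The core idea of that style of weak supervision is to replace hand labels with many noisy “labeling functions” whose votes are combined into training labels. A minimal majority-vote sketch, with function names and rules that are my own illustrations rather than Snorkel’s actual API:

```python
# Snorkel-style weak supervision in miniature: noisy labeling functions
# vote, and a combiner turns the votes into a training label.
# The labeling functions below are toy spam-detection rules of my own
# invention, not anything from Snorkel itself.

ABSTAIN = None  # a labeling function may decline to vote


def lf_has_link(text):
    return 1 if "http" in text else ABSTAIN       # links suggest spam


def lf_shouting(text):
    return 1 if text.isupper() else ABSTAIN       # ALL CAPS suggests spam


def lf_greeting(text):
    return 0 if text.lower().startswith("hi") else ABSTAIN  # greetings suggest ham


def weak_label(text, lfs=(lf_has_link, lf_shouting, lf_greeting)):
    """Majority vote over the labeling functions that did not abstain."""
    votes = [v for lf in lfs if (v := lf(text)) is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)


print(weak_label("CLICK NOW http://deal"))   # -> 1 (spam)
print(weak_label("hi, lunch tomorrow?"))     # -> 0 (ham)
```

Real systems weight the voters by estimated accuracy instead of counting them equally, but the economics are the same: cheap rules stand in for expensive hand labels, which is exactly the training data market described above.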
Furthermore, Google’s idea of a PR blitz is talking with MIT researcher Lex Fridman. Mr. Fridman interviewed the Google engineer (now a Xoogler) who thought the DeepMind system was a person and sort of alive. Mr. Fridman also spoke with the boss of DeepMind about smart software. (The video is at this link.) The themes are familiar: Great software, more behind the curtains, and doing good with Go and proteins.
Google faces several challenges with its PR effort to respond to ChatGPT:
- I am of the opinion that most people, even those involved in smart software, are not aware that Google has been running a PR and marketing campaign to make clear the superiority of its system and method. No mere footnote for the Google. We do proteins. We snorkel. We forward forward. The problem is that ChatGPT is everywhere, and people like high school students are talking about it. Even artists are aware of smart software and instant image generation, OpenAI style.
- Google remains ill equipped to respond to ChatGPT’s sudden thunder showers and wind storms of social buzz. Not even Google’s rise to fame matches what has happened to OpenAI and ChatGPT in the last few months. There are rumors that Microsoft will do more than provide Azure computing resources for ChatGPT. Microsoft may dump hard cash billions into OpenAI. Who is not excited to punch a button and have Microsoft Word write that report for you? I think high school students will embrace the idea; teachers and article writers at CNet, not so much.
- Retooling Google’s decades old systems and methods for the snappy ChatGPT approach will take time and money. Google has the money, but in the world of bomb cyclones the company may not have time. Technology fortunes can vaporize quickly like the value of a used Tesla on Cars and Bids.
Net net: Google, believe it or not, has been in its own Googley way trying to respond to its ChatGPT moment. What the company has been doing is interesting. However, unlike some of Google’s technical processes, the online information access world is able to change. Can Google? Will high school students and search engine optimization spam writers care? What about “axis of evil” outfits and their propaganda agencies? What about users who do not know when a machine-generated output is dead wrong? Google may not face an existential crisis, but the company definitely knows something is shaking the once-solid foundations of the buildings on Shoreline Drive.
Stephen E Arnold, January 16, 2023
The Pain of Prabhakar Becomes a Challenge for Microsoft
January 9, 2023
A number of online “real” news outfits have reported and predicted that ChatGPT will disrupt the Google’s alleged monopoly in online advertising. The excitement is palpable because it is not fashionable to beat up the technology giants once assumed to have feet made of superhero protein.
The financial information service called Seeking Alpha published “Bing & ChatGPT Might Work Together, Could Be Revolutionary.” My mind added “We Hope!” to the headline. Even the allegedly savvy Guardian Newspaper weighed in with “Microsoft Reportedly to Add ChatGPT to Bing Search Engine.” Among the examples I noted is the article in The Information (registration required, thank you) called “Ghost Writer: Microsoft Looks to Add OpenAI’s Chatbot Technology to Word, Email.”
The origin of this “Bing will kill Google” boomlet may be in the You.com Web search system, which includes this statement. I have put in bold face the words and phrases revealing Microsoft’s awareness of You.com:
YouChat does not use Microsoft Bing web, news, video or other Microsoft Bing APIs in any manner. Other Web links, images, news, and videos on you.com are powered by Microsoft Bing. Read Microsoft Bing Privacy Policy
I am not going to comment on the usefulness of the You.com search results. Instead, navigate to www.you.com and run some queries. I am a dinobaby, and I like command line searching. You do not need to criticize me for my preference for Stone Age search tools. I am 78 and will be in one of Dante’s toasty environments. Boolean search? Burn for eternity. Okay with me.
I would not like to be Google’s alleged head of search (maybe the word “nominal” is preferable to some). That individual is a former Verity wizard named Prabhakar Raghavan. His domain of Search, Google Assistant, Ads, Commerce, and Payments has been expanded by the colorful Code Red activity at the Google. Mr. Raghavan’s expertise and that of his staff appear to be ill-equipped to deal with one of the least secret of Microsoft’s activities. Allegedly more Google wizards have been enlisted to deal with this existential threat to Google’s search and online ad business. Well, Google is two decades old, over staffed, and locked in its aquarium. It presumably watched Microsoft invest a billion into OpenAI and did not respond. Hello, Prabhakar?
The “value” has looked like adding ChatGPT-like functions and maybe some of its open sourciness to Microsoft’s ubiquitous software. One can envision typing a dot point in PowerPoint and the smart system will create a number of slides. The PowerPoint user fiddles with the words and graphics and rushes to make a pitch at a conference or a recession-proof venture capital firm.
Imagine a Microsoft application which launches ChatGPT-type of smart search in a Word document. This type of function might be useful to crypto bros who want to explain how virtual tokens will become the Yellow Brick Road to one of the seven cities of Cibola. Sixth graders writing an essay and MBAs explaining how their new business will dominate a market will find this type of functionality a must-have. No LibreOffice build offers this type of value…yet.
What if one thinks about Outlook? (I would prefer not to know anything about Outlook, but there are individuals who spend hours each day fiddling around in email.) Writing email can become a task for ChatGPT-like software. Spammers will love this capability, particularly combined with VBScript.
The ultimate, of course, will be the integration of Teams and ChatGPT. The software can generate an instance of a virtual person and the search function can generate responses to questions directed at the construct presented to others in a Teams’ session. This capability is worth big bucks.
Let’s step back from the fantasies of killing Google and making Microsoft Office apps interesting.
Microsoft faces a handful of challenges. (I will not mention Microsoft’s excellent judgment in referencing the Federal Trade Commission as unconstitutional. Such restraint.)
First, the company has a somewhat disappointing track record in enterprise security. Enough said.
Second, Microsoft has a fascinating series of questionable engineering decisions. One example is the weirdness of old code in Windows 11. Remember that Windows 10 was to be the last version of Windows. Then there is the chaos of updates to Windows 11, particularly missteps like making printing difficult. Again enough said.
Third, Google has its own smart software. Either Mr. Raghavan is asleep at the switch and missed the signal from Microsoft’s 2019 one billion dollar investment in OpenAI, or Google’s lawyers have stepped on the smart software brake. Who owns outputs built from the content of Web sites? What happens when content from the European Union appears in outputs? (You know the answer to that question. I think it is even bigger fines, which will make Facebook’s recent half a billion dollar invoice look somewhat underweight.)
When my research team and I talked about the You.com-type search and the use of ChatGPT or other OpenAI technology in business, law enforcement, legal, healthcare, and other use cases, we hypothesized that:
- Time will be required to get the gears and wheels working well enough to deliver consistently useful outputs
- Google has responded, and no one noticed much except infinite scrolling and odd “cards” of allegedly accurate information in response to a user’s query
- Legal issues will throw sand in the gears of the machinery once the ambulance chasers tire of Camp Lejeune litigation
- Aligning costs of resources with the to-be revenue will put some potholes on this off-ramp of the information superhighway.
Net net: The world of online services is often described as being agile. A company can turn on a dime. New products and services can be issued, and fixes can make a system better over time. I know Boolean works. The ChatGPT thing seems promising. I don’t know if it replaces human thought and actions in certain use cases. Assume you have cancer. Do you want your oncologist to figure out what to do using Bing.com, Google.com, or You.com?
Stephen E Arnold, January 9, 2023
Smart Software: Just One Real Problem? You Wish
January 6, 2023
I read “The One Real Problem with Synthetic Media.” When consulting and publishing outfits point out the “one real problem” in an analysis, I get goose bumps. Am I cold? Nah, I am frightened. Write ups that claim to deliver the truth frighten me. Life is — no matter what mid tier consulting outfits say — slightly more nuanced.
What is the one real problem? The write up asserts:
Don’t use synthetic media for your business in any way. Yes, use it for getting ideas, for learning, for exploration. But don’t publish words or pictures generated by AI — at least until there’s a known legal framework for doing so. AI-generated synthetic media is arguably the most exciting realm in technology right now. Some day, it will transform business. But for now, it’s a legal third rail you should avoid.
What’s the idea behind the shocking metaphor? The third rail provides electric power to a locomotive. I think the idea is that an individual who touches a live third rail will be electrocuted.
Okay.
Are there other issues beyond the legal murkiness?
Yes, let me highlight several which strike me as important.
First, the smart software can output weaponized information quickly and economically. Whom can one believe? A college professor funded by a pharmaceutical company or a robot explaining the benefits of an electric vehicle? The hosing of synthetic content and data into a society may prove more corrosive than human outputs alone. Many believe that humans are expert misinformation generators. I submit that smart software will blow the doors off the human content jalopies.
Second, smart software ingests data, whether right or wrong, human generated or machine generated, and outputs results based on these data. What happens when machine generated content reduces the human generated content to tiny rivulets? The machine output is as formidable as Hokusai’s wave. Those humans in the boats: Goners perhaps?
Third, my thought is that in some parts of the US the slacker culture is the dominant mode. Forget that crazy, old-fashioned industrial revolution 9-to-5 work day. Ignore the pressure to move up, earn more, and buy a Buick, not a Chevrolet. Slacker culture dwellers look for the easy way to accomplish what they want. Does this slacker thing explain some FTX-type behavior? What about Amazon’s struggles with third-party resellers’ products? What about Palantir Technologies buying advertising space in the Wall Street Journal to convince me that it is the leader in smart software? Yeah, slacker stuff in my opinion. These examples and others mean that the DALL-E and ChatGPT type of razzle dazzle will gain traction.
Where are the legal questions in these three issues? Sure, legal eagles will fly when there is an opportunity to bill.
I think the smart software thing is a good example of “technology is great” thinking. The one real problem is that it is not.
Stephen E Arnold, January 6, 2023
Microsoft Reveals Its Engineering Approach: Good Enough
January 5, 2023
I was amused to read “State of the Windows: How Many Layers of UI Inconsistencies Are in Windows 11?” We have Windows 11 running on one computer in my office. The others are a lone Windows 7, four Windows 10 computers, four Mac OS machines with different odd names like High Sierra, and two Linux installations with even quirkier names. Sigh.
The article does a masterful job of pointing out that vestiges of XP, Vista, Windows 7, and Windows 8 lurk within the Windows 11 system. I have shared my opinion that Microsoft pushed Windows 11 out to customers to deflect real journalists’ attention from the security wildfire blazing in SolarWinds. Few share my viewpoint. That’s okay. I have been around a long time, and I have witnessed some remarkable executive group think when a crisis threatens to engulf a bonus. Out she goes.
But the article makes very, very clear how Microsoft approaches the engineering of its software and systems. Think of a lousy cake baked for your 12th birthday. To hide the misshapen, mostly inedible mess, someone has layered on Betty Crocker-type frosting from a can and added healthy squirts of synthetic whipped “real” cream. “Real,” of course, means that it squeaked through the FDA review process. Good enough.
Here’s one example of the treasures within Windows 11. I quote:
The Remote Desktop Connection program is still exactly the same as it was 14 years ago, complete with Aero icons and skeuomorphic common controls.
Priorities? Sure, just not engineering excellence, attention to detail, or consistency in what the user sees.
Do I think this approach is used for Azure and Exchange security?
Now the key question: “What engineering approach will Microsoft use as it applies smart large language models to Web search?”
Stephen E Arnold, January 5, 2023