Balloons, Hands Off Virtual Services, and Enablers: Technology Shadows and Ghosts
December 30, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this year (2023) I delivered a lecture called “Ghost Web.” I defined the term, identified what my team and I call “enablers,” and presented several examples. These included a fan of My Little Pony operating Dark Web friendly servers, a non-governmental organization pitching equal access, a disgruntled 20 something with a fixation on adolescent humor, and a suburban business executive pumping adult content to anyone able to click or swipe via well-known service providers. These are examples of enablers.
Enablers are accommodating. Hear no evil, see no evil, admit to knowing nothing is the mantra. Thanks, MSFT Copilot Bing thing.
Figuring out the difference between the average bad guy and a serious player in industrialized cyber crime is not easy. Here’s another possible example of how enablers facilitate actions which may be orthogonal to the interests of the US and its allies. Navigate to “U.S. Intelligence Officials Determined the Chinese Spy Balloon Used a U.S. Internet Provider to Communicate.” The report may or may not be true, but the scant information presented lines up with my research into “enablers.” (These are firms which knowingly set up their infrastructure services to allow the customer to control virtual services. The idea is that the hosting vendor does nothing but process the credit card, bank transfer, crypto, or other accepted form of payment. Done. The customer or the sys admin for the actor does the rest: Spins up the servers, installs necessary software, and operates the service. The “enabler” just looks at logs and sends bills.)
Enablers are aware that their virtual infrastructure makes it easy for a customer to operate in the shadows. Look up a url and what do you find? Missing information due to privacy regulations like those in Western Europe or an obfuscation service offered by the “enabler.” Explore the urls using an appropriate method and what do you find? Dead ends. What happens when a person looks into an enabling hosting provider? Looks of confusion because the mechanism does not know if the customers are “real”? Stuff is automatic. The blank looks reflect the reality that at certain enabling ISPs, no one knows because no one wants to know. As long as the invoice is paid, the “enabler” is a happy camper.
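The “dead end” a url lookup produces can be sketched in a few lines. This is a minimal illustration, not any investigator’s actual tooling: a function that scans a WHOIS-style record for the privacy-redaction markers that often stop a trace cold. The marker list and the sample record are made-up assumptions for the sketch.

```python
# Illustrative sketch: detect the privacy-redaction boilerplate that turns a
# WHOIS lookup into a dead end. Marker strings are assumptions, not a
# definitive list used by any real investigative tool.
REDACTION_MARKERS = (
    "redacted for privacy",
    "data protected",
    "privacy service",
    "identity protection",
)

def looks_redacted(whois_text: str) -> bool:
    """Return True if the record appears scrubbed of registrant details."""
    lowered = whois_text.lower()
    return any(marker in lowered for marker in REDACTION_MARKERS)

# A typical scrubbed record offers no usable lead:
sample = """Registrant Name: REDACTED FOR PRIVACY
Registrant Organization: Privacy service provided by a registrar
Registrant Email: Please query the RDDS service"""

print(looks_redacted(sample))  # → True
```

Run against record after record, every answer comes back the same way, which is the point: the obfuscation is a product feature, not an accident.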
What’s the NBC News report say?
U.S. intelligence officials have determined that the Chinese spy balloon that flew across the U.S. this year used an American internet service provider to communicate, according to two current and one former U.S. official familiar with the assessment.
The “American Internet Service Provider” is an enabler. Neither the write up nor an “official” is naming the alleged enabler. I want to point out that many firms are in the enabling business. I will not identify these outfits by name, but I can characterize the types of outfits my team and I have identified. I will highlight three for this free, public blog post:
- A grifter who sets up an ISP and resells services. Some of these outfits have buildings and lease machines; others just use space in a very large utility ISP. The enabling occurs because of what we call the Russian doll set up. A big outfit allows resellers to brand an ISP service and pay a commission to the company with the pings, pipes, and other necessaries.
- An outright criminal no longer locked up sets up a hosting operation in a country known to be friendly to technology businesses. Some of these are in nation states with other problems on their hands and lack the resources to chase what looks like a simple Web hosting operation. Other variants include known criminals who operate via proxies and focus on industrialized cyber crime in different flavors.
- A business person who understands enough about technology to hire and compensate engineers to build a “ghost” operation. One such outfit divested itself of a certain sketchy business when the holding company sold what looked like a “plain vanilla” services firm. The new owner figured out what was going on and sold the problematic part of the business to another party.
There are other variants.
The big question is, “How do these outfits remain in business?” My team and I identified a number of reasons. Let me highlight a handful because this is, once again, a free blog and not a mechanism for disseminating information reserved for specialists:
The first is that the registration mechanism is poorly organized, easily overwhelmed, and without enforcement teeth. As a result, it is very easy to operate a criminal enterprise, follow the rules (such as they are), and conduct whatever online activities desired with minimal oversight. Regulation of the free and open Internet facilitates enablers.
The second is that modern methods and techniques make it possible to set up an illegal operation and rely on scripts or semi-smart software to move the service around. The game is an old one, and it is called Whack A Mole. The idea is that when investigators arrive to seize machines and information, the service is gone. The account was in the name of a fake persona. The payments arrived via a bogus bank account located in a country permitting opaque banking operations. No one where physical machines are located paid any attention to a virtual service operated by an unknown customer. Dead ends are not accidental; they are intentional and often technical.
The third is that enforcement personnel have to have time and money to pursue the bad actors. Some well publicized take downs like the Cyberbunker operation boil down to a mistake made by the owner or operator of a service. Sometimes investigators get a tip, see a message from a disgruntled employee, or attend a hacker conference and hear a lecturer explain how an encrypted email service for cyber criminals works. The fix, therefore, is additional, specialized staff, technical resources, and funding.
What’s the NBC News story mean?
Cyber crime is not just a lone wolf game. Investigators looking into illegal credit card services find that trails can lead to a person in prison in Israel or to a front company operating via the Seychelles using a Chinese domain name registrar with online services distributed around the world. The problem is like one of those fancy cakes with many layers.
How accurate is the NBC News report? There aren’t many details, but it is a fact that enablers make things happen. It’s time for regulatory authorities in the US and the EU to put on their Big Boy pants and take more forceful, sustained action. But that’s just my opinion about what I call the “ghost Web,” its enablers, and the wide range of criminal activities fostered, nurtured, and operated 24×7 on a global basis.
When a member of your family has a bank account stripped or an identity stolen, you may have few options for a remedy. Why? You are going to be chasing ghosts and the machines which make them function in the real world. What’s your ISP facilitating?
Stephen E Arnold, December 30, 2023
Scale Fail: Define Scale for Tech Giants, Not Residents of Never Never Land
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “Scale Is a Trap.” The essay presents an interesting point of view, scale from the viewpoint of a resident of Never Never Land. The write up states:
But I’m pretty convinced the reason these sites [Vice, Buzzfeed, and other media outfits] have struggled to meet the moment is because the model under which they were built — eyeballs at all cost, built for social media and Google search results — is no longer functional. We can blame a lot of things for this, such as brand safety and having to work through perhaps the most aggressive commercial gatekeepers that the world has ever seen. But I think the truth is, after seeing how well it worked for the tech industry, we made a bet on scale — and then watched that bet fail over and over again.
The problem is that the focus is on media companies designed to surf on the free megaphones like Twitter and the money from Google’s pre-threat ad programs.
However, knowledge is tough to scale. The firms which can convert knowledge into what William James called “cash value” charge for professional services. Some content is free like wild and crazy white papers. But the “good stuff” is for paying clients.
Finding enough subscribers who will pay the necessary money to read articles is a difficult business to scale. I find it interesting that Substack is accepting some content sure to attract some interesting readers. How much will these folks pay? Maybe a lot.
But scale in information is not what many clever writers or traditional publishers and authors can achieve. What happens when a person writes a best seller? The publisher demands more books, and the result? Subsequent books which are not what the original was.
Whom does scale serve? Scale delivers power and payoff to the organizations which can develop products and services that sell to a large number of people who want a deal. Scale at a blue chip consulting firm means selling to the biggest firms and the organizations with the deepest pockets.
But the scale of a McKinsey-type firm is different from the scale at an outfit like Microsoft or Google.
What is the definition of scale for a big outfit? The way I would explain what the technology firms mean when scale is kicked around at an artificial intelligence conference is “big money, big infrastructure, big services, and big brains.” By definition, individuals and smaller firms cannot deliver.
Thus, the notion of appropriate scale means what the cited essay calls a “niche.” The problems and challenges include:
- Getting the cash to find, cultivate, and grow people who will pay enough to keep the knowledge enterprise afloat
- Finding other people to create the knowledge value
- Protecting the idea space from carpetbaggers
- Remaining relevant because knowledge has a shelf life, and it takes time to grow knowledge or acquire new knowledge.
To sum up, the essay is more about how journalists are going to have to adapt to a changing world. The problem is that scale is a characteristic of the old school publishing outfits which have been ill-suited to the stress of adapting to a rapidly changing world.
Writers are not blue chip consultants. Many just think they are.
Stephen E Arnold, December 29, 2023
AI Silly Putty: Squishes Easily, Impossible to Remove from Hair
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I like happy information. I navigated to “Meta’s Chief AI Scientist Says Terrorists and Rogue States Aren’t Going to Take Over the World with Open Source AI.” Happy information. Terrorists and the Axis of Evil outfits are just going to chug along. Open source AI is not going to give these folks a super weapon. I learned from the write up that the trustworthy outfit Zuckbook has a Big Wizard in artificial intelligence. That individual provided some cheerful words of wisdom for me. Here’s an example:
It won’t be easy for terrorists to take over the world with open-source AI.
Obviously there’s a caveat:
they’d need a lot of money and resources just to pull it off.
That’s my happy thought for the day.
“Wow, getting this free silly putty out of your hair is tough,” says the scout mistress. The little scout asks, “Is this similar to coping with open source artificial intelligence software?” Thanks, MSFT Copilot. After a number of weird results, you spit out one that is good enough.
Then I read “China’s Main Intel Agency Has Reportedly Developed An AI System To Track US Spies.” Oh, oh. Unhappy AI information. China, I assume, has the open source AI software. It probably has in its 1.4 billion population a handful of AI wizards comparable to the Zuckbook’s line up. Plus, despite economic headwinds, China has money.
The write up reports:
The CIA and China’s Ministry of State Security (MSS) are toe to toe in a tense battle to beat one another’s intelligence capabilities that are increasingly dependent on advanced technology…, the NYT reported, citing U.S. officials and a person with knowledge of a transaction with contracting firms that apparently helped build the AI system. But, the MSS has an edge with an AI-based system that can create files near-instantaneously on targets around the world complete with behavior analyses and detailed information allowing Beijing to identify connections and vulnerabilities of potential targets, internal meeting notes among MSS officials showed.
Not so happy.
Several observations:
- The smart software is a cat out of the bag
- There are intelligent people who are not pals of the US who can and will use available tools to create issues for a perceived adversary
- The AI technology is like silly putty: Easy to get, free or cheap, and tough to get out of someone’s hair.
What’s the deal with silly putty? Cheap, easy, and tough to remove from hair, carpet, and seat upholstery. Just like open source AI software in the hands of possibly questionable actors. How are those government guidelines working?
Stephen E Arnold, December 29, 2023
The American Way: Loose the Legal Eagles! AI, Gray Lady, AI.
December 29, 2023
This essay is the work of a dumb dinobaby. No smart software required.
With the demands of the holidays, I have been remiss in commenting upon the festering legal sores plaguing the “real” news outfits. Advertising is tough to sell. Readers want some stories, not every story. Subscribers churn. The dead tree version of “real” news turns yellow in the windows of the shrinking number of bodegas, delis, and coffee shops interested in losing floor space to “real” news displays.
A youthful senior manager enters Dante’s fifth circle of Hades, the Flaming Legal Eagles Nest. Beelzebub wishes the “real” news professional good luck. Thanks, MSFT Copilot, I encountered no warnings when I used the word “Dante.” Good enough.
Google may be coming out of the dog training school with some slightly improved behavior. The leash does not connect to a shock collar, but maybe the courts will curtail some of the firm’s more interesting behaviors. The Zuckbook and X.com are news shy. But the smart software outfits are ripping the heart out of “real” news. That hurts, and someone is going to pay.
Enter the legal eagles. The target is AI or smart software companies. The legal eagles say, “AI, gray lady, AI.”
How do I know? Navigate to “New York Times Sues OpenAI, Microsoft over Millions of Articles Used to Train ChatGPT.” The write up reports:
The New York Times has sued Microsoft and OpenAI, claiming the duo infringed the newspaper’s copyright by using its articles without permission to build ChatGPT and similar models. It is the first major American media outfit to drag the tech pair to court over the use of stories in training data.
The article points out:
However, to drive traffic to its site, the NYT also permits search engines to access and index its content. "Inherent in this value exchange is the idea that the search engines will direct users to The Times’s own websites and mobile applications, rather than exploit The Times’s content to keep users within their own search ecosystem." The Times added it has never permitted anyone – including Microsoft and OpenAI – to use its content for generative AI purposes. And therein lies the rub. According to the paper, it contacted Microsoft and OpenAI in April 2023 to deal with the issue amicably. It stated bluntly: "These efforts have not produced a resolution."
I think this means that the NYT used online search services to generate visibility, access, and revenue. However, it did not expect, understand, or consider that when a system indexes content, that content is used for other search services. Am I right? A doorway works two ways. The NYT wants it to work one way only. I may be off base, but the NYT is aggrieved because it did not understand the direction of AI research which has been chugging along for 50 years.
What do smart systems require? Information. Where do companies get content? From online sources accessible via a crawler. How long has this practice been chugging along? The early 1990s, even earlier if one considers text and command line only systems. Plus the NYT tried its own online service and failed. Then it hooked up with LexisNexis, only to pull out of the deal because the “real” news was worth more than LexisNexis would pay. Then the NYT spun up its own indexing service. Next the NYT dabbled in another online service. Plus the outfit acquired About.com. (Where did those writers get that content? I know the answer, but does the Gray Lady remember?)
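The crawl-and-index loop that has been “chugging along” since the early 1990s can be sketched in a handful of lines. This is a bare-bones illustration under stated assumptions: the HTML snippet, robots.txt text, and example.com domain are fixtures invented for the sketch, and a real crawler adds a fetch layer, politeness delays, and deduplication.

```python
# Minimal sketch of the decades-old crawl-and-index loop: extract links from
# a page, then keep only the ones robots.txt permits a polite crawler to
# fetch. All inputs here are made-up fixtures.
from html.parser import HTMLParser
from urllib import robotparser

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags as the parser walks the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# robots.txt is the publisher's lever: it controls what a polite crawler
# may index. Here the /private/ section is off limits.
rp = robotparser.RobotFileParser()
rp.parse("User-agent: *\nDisallow: /private/".splitlines())

page = '<a href="/story1">One</a> <a href="/private/draft">Two</a>'
extractor = LinkExtractor()
extractor.feed(page)

allowed = [link for link in extractor.links
           if rp.can_fetch("*", "https://example.com" + link)]
print(allowed)  # → ['/story1']
```

The rub in the NYT’s complaint is visible right in the sketch: the same door that lets a search engine index a story for traffic also hands that story to whoever else walks through.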
Now with the success of another generation of software which the Gray Lady overlooked, did not understand, or blew off because it was dealing with high school management methods in its newsroom — now the Gray Lady has let loose the legal eagles.
What do I make of the NYT and online? Here are the conclusions I reached working on the Business Dateline database and then as an advisor to one of the NYT’s efforts to distribute the “real” news to hotels and steam ships via facsimile:
- Newspapers are not very good at software. Hey, those Linotype machines were killers, but the XyWrite software and subsequent online efforts have demonstrated remarkable ways to spend money and progress slowly.
- The smart software crowd is not in touch with the thought processes of those in senior management positions in publishing. When the groups try to find common ground, arguments over who pays for lunch are more common than a deal.
- Legal disputes are expensive. Many of those engaged reach some type of deal before letting a judge or a jury decide which side is the winner. Perhaps the NYT is confident that a jury of its peers will find the evil AI outfits guilty of a range of heinous crimes. But maybe not? Is the NYT a risk taker? Who knows. But the NYT will pay some hefty legal bills as it rushes to do battle.
Net net: I find the NYT’s efforts following a basic game plan. Ask for money. Learn that the money offered is less than the value the NYT slaps on its “real” news. The smart software outfit does what it has been doing. The NYT takes legal action. The lawyers engage. As the fees stack up, the idea that a deal is needed makes sense.
The NYT will do a deal, declare victory, and go back to creating “real” news. Sigh. Why? Microsoft has more money and can tie up the matter in court until Hell freezes over in my opinion. If the Gray Lady prevails, chalk up a win. But the losers can just up their cash offer, and the Gray Lady will smile a happy smile.
Stephen E Arnold, December 29, 2023
A Dinobaby Misses Out on the Hot Searches of 2023
December 28, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I looked at “Year in Search 2023.” I was surprised at how out of the flow of consumer information I was. “Out of the flow” does not fully capture my reaction to the lists of news topics, dead people, and songs. Do you know much about Bizarrap? I don’t. More to the point, I have never heard of the obviously world-class musician.
Several observations:
First, when people tell me that Google search is great, I have to recalibrate my internal yardsticks to embrace queries for entities unrelated to my microcosm of information. When I assert that Google search sucks, I am looking for information absolutely positively irrelevant to those seeking insight into most of the Google top of the search charts. No wonder Google sucks for me. Google is keeping pace with maps of sports stadia.
Second, as I reviewed these top searches, I asked myself, “What’s the correlation between advertisers’ spend and the results on these lists?” My idea is that a weird quantum linkage exists in a world inhabited by incentivized programmers, advertisers, and the individuals who want information about shirts. Is the game rigged? My hunch is, “Yep.” Spooky action at a distance I suppose.
Third, substantive topics are rare birds on these lists. Who is looking for information about artificial intelligence, precision and recall in search, or new approaches to solving matrix math problems? The answer, if the Google data are accurate and not a come-on to advertisers, is almost no one.
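For readers who did not search for it either, the precision and recall mentioned above are the two standard yardsticks for judging a search system. Here is a hedged, minimal illustration; the ten-document query and the document identifiers are invented for the example.

```python
# Toy illustration of the two classic retrieval metrics. The document sets
# below are made up; real evaluations use judged query collections.
def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved documents that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Fraction of relevant documents the system managed to retrieve."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {"d1", "d2", "d3", "d4"}   # what the engine returned
relevant  = {"d2", "d3", "d7", "d9"}   # what actually answered the query

print(precision(retrieved, relevant))  # → 0.5
print(recall(retrieved, relevant))     # → 0.5
```

Half of what came back was junk, and half of what mattered never came back at all. A system tuned for advertisers has little incentive to publish either number.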
As a dinobaby, I am going to feel more comfortable in my isolated chamber in a cave of what I find interesting. For 2024, I have steeled myself to exist without any interest in Ginny & Georgia, FIFTY FIFTY, or papeda.
I like being a dinobaby. I really do.
Stephen E Arnold, December 28, 2023
Want to Fix Technopoly Life? Here Is a Plan. Implement It. Now.
December 28, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Cal Newport published an interesting opinion essay in New Yorker Magazine called “It Is Time to Dismantle the Technopoly.” The point upon which I wish to direct my dinobaby microscope appears at the end of the paywalled artistic commentary. Here’s the passage:
We have no other reasonable choice but to reassert autonomy over the role of technology in shaping our shared story.
The author or a New Yorker editor labored over this remarkable sentence.
First, I want to point out that there is a somewhat ill-defined or notional “we.” Okay, exactly who is included in the “we”? I would suggest that the “technopoly” is excluded. The title of the article makes clear that dismantle means taking apart, disassembling, or deconstructing. How will that be accomplished in a nation state like the US? What about the four entities in the alleged “Axis of Evil”? Are there other social constructs like an informal, distributed group of bad actors who want to make smart software available to anyone who wants to mount phishing and ransomware attacks? Okay, that’s the we problem, not too tiny is it?
A teacher explains to her top students that they have an opportunity to define some interesting concepts. The students do not look too happy. As the students grow older, their interest in therapist jargon may increase. The enthusiasm for defining such terms remains low. Thanks, MSFT Copilot.
Second, “no other reasonable choice.” I think you know where I am going with my next question: What does “reasonable” mean? I think the author knows or hopes that the “we” will recognize “reasonable” when those individuals see it. But reason is slippery, particularly in an era in which literacy is defined as being able to “touch and pay” and “swipe left.” What happens when a computing device equipped with good-enough smart software “frames” an issue? How does one define “reasonable” if the information used to make that decision is weaponized, biased, or defined by a system created by the “technopoly”? Who other than lawyers wants to argue endlessly over an epistemological issue? Not me. The “reasonable” is pulled from the same word list used by some of the big technology outfits. Isn’t Google reasonable when it explains that it cares about the user’s experience? What about Meta (the Zuckbook) and its crystal clear explanations of kiddie protections on its services? What about the explanations of legal experts arguing against one another? The word “reasonable” strikes me as therapist speak or mother-knows-best talk.
Third, the word “reassert” suggests that it is time to overthrow the technopoly. I am not sure a Boston Tea Party-type event will do the trick. Technology, particularly open source software, makes it easy for a bad actor working from a beat down caravan near Makarska to create a new product or service that sweeps through the public network. How is “reassert” going to cope with an individual hooked into an online, distributed criminal network? Believe me, Europol is trying, but the work is difficult. But the notion of “reassert” implies that there was a prior state, a time when technopolists were not the focal point of “The New Yorker.” “Reassert” is a call to action. The who, how, when, and where questions are not addressed. The result is crazy rhetoric which, I suppose, might work if one were a TikTok influencer backed by a large country’s intelligence apparatus. But that might not work either. The technopolies have created the datasphere, and it is tough to grab a bale of tea and pitch it in the Boston Harbor today. “Heave those bits overboard, mates” won’t work.
Fourth, “autonomy.” I am not sure what “autonomy” means. When I was taking required classes at the third-rate college I attended, I learned the definition each instructor presented. Then, like a good student chasing top marks, I spit the definition back. Bingo. The method worked remarkably well. The notion of “autonomy” dredges up explanations of free will and predestination. “Autonomy” sounds like a great idea to some people. To me, it smacks of ideas popular when Ben Franklin was chasing females through French doors before he was asked to return to the US of A. YouTube is chock-a-block with off-the-grid methods. Not too many people go off the grid and remain there. When someone disappears, it becomes “news.” And the person or the entity’s remains become an anecdote on a podcast. How “free” is a person in the US to “dismantle” a public or private enterprise? Can one “dismantle” a hacker? Remember those homeowners who put bullets in an intruder and found themselves in jail? Yeah. Autonomy. How’s that working out in other countries? What about the border between French Guiana and Brazil? Do something wrong and the French Foreign Legion will define “autonomy” in terms of a squad solving a problem. Bang. Done. Nice idea that “autonomy” stuff.
Fifth, the word “role” is interesting. I think of “role” as a character in a social setting; for example, a CEO who is insecure about how he or she actually became a CEO. That individual tries to play a “role.” A character like the actor who becomes “Mr. Kitzel” on a Jack Benny Show plays a role. The talking heads on cable news play a “role.” Technology enables, it facilitates, and it captivates. I suppose that’s its “role.” I am not convinced. Technology does what it does because humans have shaped a service, software, or system to meet an inner need of a human user. Technology is like a gerbil. Look away and there are more and more little technologies. Due to human actions, the little technologies grow and then the actions of lots of human make the technologies into digital behemoths. But humans do the activating, not the “technology.” The twist with technology is that as it feeds on human actions, the impact of the two interacting is tough to predict. In some cases, what happens is tough to explain as that action is taking place. A good example is the role of TikTok in shaping the viewpoints of some youthful fans. “Role” is not something I link directly to technology, but the word implies some sort of “action.” Yeah, but humans were and are involved. The technology is perhaps a catalyst or digital Teflon. It is not Mr. Kitzel.
Sixth, the word “shaping” in the cited sentence directly implies that “technology” does something. It has intent. Nope. The humans who control or who have unrestricted access to the “technology” do the shaping. The technology — sorry, AI fans — is following instructions. Some instructions come from a library; others can be cooked up based on prior actions. But for most technology technology is inanimate and “smart” to uninformed people. It is not shaping anything unless a human set up the system to look for teens want to commit suicide and the software identifies similar content and displays it for the troubled 13 year old. But humans did the work. Humans shape, distort, and weaponize. The technology is putty composed of zeros and ones. If I am correct, the essay wants to terminate humans. Once these bad actors are gone, the technology “problem” goes away. Sounds good, right?
Finally, the words “shared story.” What is this “shared story”? The commentary on a spectacular shot to win a basketball game? A myth that Thomas Jefferson was someone who kept his trousers buttoned? The story of a Type A researcher who experimented with radium and ended up a poster child for radiation poisoning? An influencer who escaped prison and became a homeless minister caring for those without jobs and a home? The “shared story” is a baffler. My hunch is that “shared story” is something that the “we” are sad has disappeared. My family was one of the group that founded Hartford, Connecticut, in the 17th century. Is that the Arnolds’ shared story? News flash: There are not many Arnolds left, and those who remain laugh when I “share” that story. It means zero to them. If you want a “shared story”, go viral on YouTube or buy Super Bowl ads. Making friends with Taylor Swift will work too.
Net net: The mental orientation of the cited essay is clear in one sentence. Yikes, as the honor students might say.
Stephen E Arnold, December 28, 2023
Why Stuff No Longer Works Very Well
December 28, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Own a Tesla? What about those Southwest flight delays? Been to a hospital emergency room in DC? Tried to get a plumber on a holiday? Yep, systems work … sometimes, sort of, or mostly. Have you ever wondered why teens working at a fruit market cannot make change, recognize a fifty cent piece, or know zero about when the grapes were put on display?
I think I have found the answer to these and other questions about modern life. Navigate to “Become an Expert in Less Than an Hour.” The write up is a how-to for being superficially smart. Now, don’t get me wrong, superficiality is an important characteristic. People decide whether a person is okay or not in seconds, maybe less. Impressing a person to whom one is selling a used car relies on that instant charm feature of some people. The skill of superficial smartness is important to those who want to pick up a person of interest in a bar, a consultant at a blue chip firm, a lawyer explaining his fees to a trust customer, and political advisors who shift from art history to geopolitics over lunch.
The write up reduces superficial intelligence to a cook book, and I think quite a few people will find the ideas in the essay of considerable value. Here’s an example:
“anthropologists frequently have to learn how to grok an entire subfield in under an hour. Yes, real expertise takes years of hard work, but identifying the key works and ideas that define a subfield can be done quickly if you know where to look.”
Perfect.
Stephen E Arnold, December 28, 2023
Quantum Management: The Google Method
December 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read a story (possibly sad or at least bittersweet) in Inc. Magazine. “Google Fired 12,000 Employees. A Year Later, the CEO Says It Was the Right Call, Just Done in the Wrong Way” asks an interesting question of a company which has triggered a number of employee-related actions. From protests to stochastic parrots, the Google struggles to tailor its management methods to the people it hires.
What happens when high school science club engineering is applied to modern tasks? Some projects fall down. Hello, San Francisco, do you have a problem with a certain big building? Thanks, MSFT Copilot. Good enough.
The story reports:
A few days ago, Google’s CEO Sundar Pichai openly acknowledged that the way Google managed the layoff of 12,000 employees, about 6 percent of its workforce, was not done right…. Initially, Google’s stance on the layoffs was presented as a strategic necessity, a move to streamline operations and focus on crucial business areas…. Pichai’s frank admission that the process could have been handled differently is a notable shift from the company’s earlier justifications.
What I think this means is that Google’s esteemed leader made a somewhat typical decision for a person imbued with some of the philosophy of a non-Western culture. In 2023, Google has lurched from Red Alert to Red Alert. In January 2023, Microsoft seized the marketing initiative in the lucrative world of enterprise artificial intelligence. And what about some of Google’s AI demonstrations? Yeah, some were edited and tweaked to be more Googley. Then after a couple of high profile legal cases went against the company, Sundar Pichai has allegedly admitted that he has made some errors.
No kidding. Like the architects and engineers of the Florida high rise whose collapse ruined the day of a number of people, mistakes were made. I suppose San Francisco’s Millennium Tower could topple over the holidays. That event would pull some eyeballs off the online advertising company.
The sad reality is that Google’s senior management is pushing buttons and getting poor results. The Inc. Magazine article ends this way:
The key questions moving forward are: Will Google face any repercussions for the way it handled the layoffs? What concrete actions will the company take to improve communication and support for its employees, both those who were let go and those who remain? And, importantly, how will this experience shape Google’s, and potentially other companies’, approach to workforce management in the future?
Questions, just not the right one. In my opinion, Google’s Board of Directors may want to ask:
Is it time to bid adieu to Sundar Pichai and his expensive hires? With the current team in place, Google’s core business model is at risk from ChatGPT-type findability services, legal eagles are hovering over the company, and there is now a public admission that firing 12,000 wizards by email was a mistake. I ask, “What’s next, Sundar?”
Net net: The company’s management method (which reminds me of how my high school science club solved problems) is showing signs of cracking and crumbling in my opinion.
Stephen E Arnold, December 27, 2023
AI Risk: Are We Watching Where We Are Going?
December 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
To brighten your New Year, navigate to “Why We Need to Fear the Risk of AI Model Collapse.” I love those words: Fear, risk, and collapse. I noted this passage in the write up:
When an AI lives off a diet of AI-flavored content, the quality and diversity is likely to decrease over time.
I think the idea of marrying one’s first cousin or training an AI model on AI-generated content is a bad idea. I don’t really know, but I find the idea interesting. The write up continues:
Is this model at risk of encountering a problem? Looks like it to me. Thanks, MSFT Copilot. Good enough. Falling off the I beam was a non-starter, so we have a more tame cartoon.
Model collapse happens when generative AI becomes unstable, wholly unreliable or simply ceases to function. This occurs when generative models are trained on AI-generated content – or “synthetic data” – instead of human-generated data. As time goes on, “models begin to lose information about the less common but still important aspects of the data, producing less diverse outputs.”
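The dynamic described in that passage can be illustrated with a toy simulation. The names and parameters below are hypothetical, and fitting a one-dimensional Gaussian is obviously a cartoon of training a generative model; the point is only that when each generation is fit exclusively to the previous generation’s synthetic output, sampling noise compounds and the spread of the data collapses over time:

```python
import random
import statistics

def fit_and_resample(data, n):
    """Fit a normal distribution by maximum likelihood to the data,
    then draw n synthetic samples from the fitted model."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # population std dev: the MLE, biased slightly low
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
n = 20
# Generation 0: the "human-generated" data.
data = [random.gauss(0.0, 1.0) for _ in range(n)]
spread = [statistics.pstdev(data)]

# Each later generation is trained only on the previous generation's output.
for _ in range(500):
    data = fit_and_resample(data, n)
    spread.append(statistics.pstdev(data))

print(f"spread of generation 0:   {spread[0]:.3f}")
print(f"spread of generation 500: {spread[-1]:.6f}")
```

Run it and the spread of the final generation is a small fraction of the original: the “less common but still important aspects of the data” are exactly what the refitting loop loses first.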
I think this passage echoes some of my team’s thoughts about the SAIL Snorkel method. Googzilla needs a snorkel when it does data dives in some situations. The company often deletes data until a legal proceeding reveals what’s under the company’s expensive, smooth, sleek, true blue, gold trimmed kimonos.
The write up continues:
There have already been discussions and research on perceived problems with ChatGPT, particularly how its ability to write code may be getting worse rather than better. This could be down to the fact that the AI is trained on data from sources such as Stack Overflow, and users have been contributing to the programming forum using answers sourced in ChatGPT. Stack Overflow has now banned using generative AIs in questions and answers on its site.
The essay explains a couple of ways to remediate the problem. (I like fairy tales.) The first is to use data that comes from “reliable sources.” What’s the definition of reliable? Yeah, problem. Second, the smart software companies have to reveal what data were used to train a model. Yeah, techno feudalists totally embrace transparency. And, third, “ablate” or “remove” “particular data” from a model. Yeah, who defines “bad” or “particular” data? How about the techno feudalists, their contractors, or their former employees?
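The first two remedies both reduce to provenance: knowing where each training record came from and filtering on that knowledge. A minimal sketch of what that would look like, assuming records actually carry provenance tags (the `Record` class and tag names here are hypothetical; in practice those tags rarely exist, which is the catch):

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str  # provenance tag, e.g. "human", "synthetic", "unknown"

def filter_for_training(records, allowed=("human",)):
    """Keep only records whose provenance tag is on the allow list.
    Anything synthetic or untagged is dropped before training."""
    return [r for r in records if r.source in allowed]

corpus = [
    Record("hand-written answer", "human"),
    Record("chatbot-generated answer", "synthetic"),
    Record("scraped forum post", "unknown"),
]
clean = filter_for_training(corpus)
```

The filter itself is trivial; the hard, unsolved part is the tagging, which is why “use reliable sources” sounds more like a wish than a plan.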
For now, let’s just use our mobile phone to access MSFT Copilot and fix our attention on the screen. What’s to worry about? The person in the cartoon put the humanoid form in the apparently risky and possibly dumb position. What could go wrong?
Stephen E Arnold, December 27, 2023
Google Gobbles Apple Alums
December 27, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Technology companies are notorious for poaching employees from one another. Stealing employees is so common that business experts have studied it for years. One of the more recent studies concentrates on the destination of ex-Apple associates as told by PC Magazine: “Apple Employees Leave For Google More Than Any Other Company.”
Switch on Business investigated LinkedIn data to determine which tech giants poach the industry’s best talent. All of the big names were surveyed: Uber, Intel, Adobe, Salesforce, Nvidia, Netflix, Oracle, Tesla, IBM, Microsoft, Meta, Apple, Amazon, and Google. The study mainly focused on employees working at the aforementioned names and whether they switched to another listed company.
Meta had the highest proportion of poached talent of any of the tech giants, with 26.51% of employees having worked at a rival. Google had the most such talent by volume, at 24.15%. IBM poached the fewest employees, at 2.28%. Apple took 5.7% of its competitors’ talent, and that figure comes with some drama. Apple used to purchase Intel chips for its products; when the company decided to build its own chips, it hired 2,000 people away from Intel.
The most interesting factoids are the patterns found in employee movements:
“Potentially surprising is the fact that Apple employees are twice as likely to make the move to Google from Apple than the next biggest post-Apple destination, Amazon. After Amazon, Apple employees make the move to Meta, followed by Microsoft, Tesla, Nvidia, Salesforce, Adobe, Intel, and Oracle.
As for where Apple employees come from, new Apple employees are most likely to enter the company from Intel, followed by Microsoft, Amazon, Google, IBM, Oracle, Tesla, Nvidia, Adobe, and Meta.
While Apple employees are most often headed to Google, Google employees are most often headed to Meta, Microsoft, and Amazon, with Apple only making it to fourth on the list.”
It sounds like a hiring game of ring-around-the-rosy. Unless the employees retire, they’ll eventually make it back to their first company.
Whitney Grace, December 25, 2023