What Techno-Optimism Seems to Suggest (Oligopolies, a Plutocracy, or Utopia)
February 23, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Science and mathematics are comparable to religion. These fields of study attract acolytes who study and revere associated knowledge and shun nonbelievers. The advancement of modern technology is its own subset of religious science and mathematics combined with philosophical doctrine. Tech Policy Press discusses the changing views on technology-based philosophy in: “Parsing The Political Project Of Techno-Optimism.”
Rich venture capitalists Marc Andreessen and Ben Horowitz are influential in Silicon Valley. While they’ve shaped modern technology with their investments, they also tried drafting a manifesto about how technology should be handled in the future. They “creatively” labeled it the “techno-optimist manifesto.” It promotes an ideology that favors rich people increasing their wealth by investing in politicians who will help them achieve this.
Techno-optimism has not become the new mantra of Silicon Valley; the manifesto’s reception did not go over well. Andreessen wrote:
“Techno-Optimism is a material philosophy, not a political philosophy…We are materially focused, for a reason – to open the aperture on how we may choose to live amid material abundance.”
He also labeled this section, “the meaning of life.”
Techno-optimism is a revamped version of the Californian ideology that reigned in the 1990s. It preached that the future should be shaped by engineers, investors, and entrepreneurs without governmental influence. Techno-optimism wants venture capitalists to be untaxed with unregulated portfolios.
Horowitz added his own Silicon Valley-type tidbit:
“‘…will, for the first time, get involved with politics by supporting candidates who align with our vision and values specifically for technology. (…) [W]e are non-partisan, one issue voters: if a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them.’”
Horowitz and Andreessen are giving the world what some might describe as “a one-finger salute.” These venture capitalists want to do whatever they want wherever they want with governments in their pockets.
This isn’t a new ideology or a philosophy. It’s a rebranding of socialism and fascism and communism. There’s an even better word that describes techno-optimism: Plutocracy. I am not sure the approach will produce a Utopia. But there is a good chance that some giant techno feudal outfits will reap big rewards. But another approach might be to call techno optimism a religion and grab the benefits of a tax exemption. I wonder if someone will create a deep fake of Jim and Tammy Faye? Interesting.
Whitney Grace, February 23, 2024
Security Debt: So Just Be a Responsible User / Developer
February 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Security appears to be one of the next big things. Smart software strapped onto cyber safeguard systems is a no-lose proposition for vendors. Does it matter that bolted-on AI may not work? Nope. The important point is to ride the opportunity wave.
What’s interesting is that security is becoming a topic discussed at 75-something bridge groups and at lunch gatherings in government agencies concerned about fish and trees. Can third-party security services, grandmothers chasing a grand slam, or an expert in river fowl address security problems? I would suggest that the idea that security is the user’s responsibility is an interesting way to dodge responsibility. The estimable 23andMe tried this play, and I am not too sure that it worked.
Can security debt become the invisible hand creating opportunities for bad actors? Has the young executive reached the point of no return for a personal debt crisis? Thanks, MSFT Copilot Bing for a good enough illustration.
Who can address the security issues in the software people and organizations use today? “Why Software Security Debt Is Becoming a Serious Problem for Developers” states:
Over 70% of organizations have software containing flaws that have remained unfixed for longer than a year, constituting security debt,
Plus, the article asserts:
46% of organizations were found to have persistent, high-severity flaws that went unaddressed for over a year
Security issues exist. But the question is, “Who will address these flaws, gaps, and mistakes?”
The article cites an expert who opines:
“The further that you shift [security testing] to the developer’s desktop and have them see it as early as possible so they can fix it, the better, because number one it’s going to help them understand the issue more and [number two] it’s going to build the habits around avoiding it.”
But who is going to fix the security problems?
In-house developers may not have the expertise or access to the uncompiled code needed to identify and remediate flaws. Open source and other third-party software can change without notice, whether the change serves the maintainers themselves or the bad actors manipulating open source projects and “approved” apps available from a large technology company’s online store.
The article offers a number of suggestions, but none of these strike me as practical for some or most organizations.
Here’s the problem: Security is not a priority until a problem surfaces. Then when a problem becomes known, the delay between compromise, discovery, and public announcement can be — let’s be gentle — significant. Once a cyber security vendor “discovers” the problem or learns about it from a customer who calls and asks, “What has happened?”, the PR machines grind into action.
The “fixes” are typically rush jobs for these reasons:
- The vendor and the developer who made the zero a one do not earn money by fixing old code. Another factor is that the person or team responsible for the misstep is long gone, working as an Uber driver, or sitting in a rocking chair in a warehouse for the elderly
- The complexity of “going back” and making a fix may create other problems. These dependencies are unknown, so a fix just creates more problems. Writing a shim or wrapper code may be good enough to get the angry dogs to calm down and stop barking.
- The security flaw may be unfixable; that is, the original approach includes, and may even depend on, flaws accepted for performance, expediency, or some quite revenue-centric reason. No one wants to rebuild a Pinto that explodes in a rear-end collision. Let the lawyers deal with it. When it comes to code, lawyers are definitely equipped to resolve security problems.
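The shim or wrapper tactic mentioned in the list above can be sketched briefly. This is a minimal, hypothetical illustration: `legacy_parse` is a made-up stand-in for old, untouchable code, not anything from a real product, and the validation rules are assumptions.

```python
# Hypothetical sketch: wrapping a legacy parser whose internals cannot be
# safely modified. All names and limits here are illustrative assumptions.

def legacy_parse(record: str) -> dict:
    # Stand-in for old, untouchable code with no input validation.
    key, value = record.split("=", 1)
    return {key: value}

def safe_parse(record: str) -> dict:
    """Shim: validate and sanitize input before the legacy code sees it."""
    if not isinstance(record, str) or len(record) > 1024:
        raise ValueError("record rejected: wrong type or too long")
    if "=" not in record:
        raise ValueError("record rejected: missing delimiter")
    # Strip non-printable characters the legacy parser would mishandle.
    cleaned = "".join(ch for ch in record if ch.isprintable())
    return legacy_parse(cleaned)
```

The wrapper does not repair the underlying flaw; it just keeps the angry dogs from barking, which is exactly why security debt keeps accruing.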
The write up contains a number of statistics, but it makes one major point:
Security debt is mounting.
Like a young worker who lives by moving credit card debt from vendor to vendor, getting out of the debt hole may be almost impossible. But, hey, it is that individual’s responsibility, not the system. Just be responsible. That is easy to say, and it strikes me as somewhat hollow.
Stephen E Arnold, February 15, 2024
Developers, AI Will Not Take Your Jobs… Yet
February 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
It seems programmers are safe from an imminent AI jobs takeover. The competent ones, anyway. LeadDev reports, “Researchers Say Generative AI Isn’t Replacing Devs Any Time Soon.” Generative AI tools have begun to lend developers a helping hand, but nearly half of developers are concerned they might lose their jobs to their algorithmic assistants.
Another MSFT Copilot completely original Bing thing. Good enough but that fellow sure looks familiar.
However, a recent study by researchers from Princeton University and the University of Chicago suggests they have nothing to worry about: AI systems are far from good enough at programming tasks to replace humans. Writer Chris Stokel-Walker tells us the researchers:
“… developed an evaluation framework that drew nearly 2,300 common software engineering problems from real GitHub issues – typically a bug report or feature request – and corresponding pull requests across 12 popular Python repositories to test the performance of various large language models (LLMs). Researchers provided the LLMs with both the issue and the repo code, and tasked the model with producing a workable fix, which was tested after to ensure it was correct. But only 4% of the time did the LLM generate a solution that worked.”
Researcher Carlos Jimenez notes these problems are very different from those LLMs are usually trained on. Specifically, the article states:
“The SWE-bench evaluation framework tested the model’s ability to understand and coordinate changes across multiple functions, classes, and files simultaneously. It required the models to interact with various execution environments, process context, and perform complex reasoning. These tasks go far beyond the simple prompts engineers have found success using to date, such as translating a line of code from one language to another. In short: it more accurately represented the kind of complex work that engineers have to do in their day-to-day jobs.”
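For readers curious what an evaluation harness of this general shape might look like in outline, here is a hedged sketch: give a model a real issue plus the repository, apply its proposed fix, run the tests, and count the pass rate. The data structures and helper names are my assumptions for illustration, not SWE-bench’s actual API.

```python
# Illustrative sketch of a SWE-bench-style evaluation loop.
# All names here are assumptions, not the benchmark's real interface.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    issue_text: str                    # the GitHub issue (bug report / feature request)
    repo_snapshot: str                 # repository code provided to the model
    run_tests: Callable[[str], bool]   # True if the patched repo passes its tests

def evaluate(model: Callable[[str, str], str], tasks: list[Task]) -> float:
    """Ask the model for a fix on each task; report the fraction that pass."""
    solved = 0
    for task in tasks:
        patched_repo = model(task.issue_text, task.repo_snapshot)
        if task.run_tests(patched_repo):
            solved += 1
    return solved / len(tasks)
```

Under a harness like this, the researchers’ reported 4% figure would mean roughly 1 in 25 generated fixes survived the test suite.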
Will AI someday be able to perform that sort of work? Perhaps, but the researchers consider it more likely we will never find AI coding independently. Instead, we will continue to need human developers to oversee algorithms’ work. They will, however, continue to make programmers’ jobs easier. If Jimenez and company are correct, developers everywhere can breathe a sigh of relief.
Cynthia Murrell, February 15, 2024
A Xoogler Explains AI, News, Inevitability, and Real Business Life
February 13, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an essay providing a tiny bit of evidence that one can take the Googler out of the Google, but that Xoogler still retains some Googley DNA. The item appeared in the Bezos bulldozer’s estimable publication with the title “The Real Wolf Menacing the News Business? AI.” Absolutely. Obviously. Who does not understand that?
A high-technology sophist explains the facts of life to a group of listeners who are skeptical about artificial intelligence. The illustration was generated after three tries by Google’s own smart software. I love the miniature horse and the less-than-flattering representation of a sales professional. That individual looks like one who would be more comfortable eating the listeners than convincing them about AI’s value.
The essay contains a number of interesting points. I want to highlight three and then, as I quite enjoy doing, I will offer some observations.
The author is a Xoogler who served from 2017 to 2023 as the senior director of news ecosystem products. I quite like the idea of a “news ecosystem.” But ecosystems, as anyone who follows the impact of man on environments knows, can be destroyed or pushed to the edge of catastrophe. In the aftermath of devastation coming from indifferent decision makers, greed-fueled entrepreneurs, or rhinoceros poachers, landscapes are often transformed.
First, the essay writer argues:
The news publishing industry has always reviled new technology, whether it was radio or television, the internet or, now, generative artificial intelligence.
I love the word “revile.” It suggests that ignorant individuals are unable to grasp the value of certain technologies. I also like the very clever use of the word “always.” Categorical affirmatives make the world of zeros and ones so delightfully absolute. We’re off to a good start, I think.
Second, we have a remarkable argument which invokes another zero and one type of thinking. Consider this passage:
The publishers’ complaints were premised on the idea that web platforms such as Google and Facebook were stealing from them by posting — or even allowing publishers to post — headlines and blurbs linking to their stories. This was always a silly complaint because of a universal truism of the internet: Everybody wants traffic!
I love those universal truisms. I think some at Google honestly believe that their insights, perceptions, and beliefs are the One True Path Forward. Confidence is good, but the implication that a universal truism exists strikes me as information about a psychological and intellectual aberration. Consider this truism offered by my uneducated great grandmother:
Always get a second opinion.
My great grandmother used the logically troublesome word “always.” The idea seems reasonable, but the action may not be possible. Does Google get second opinions when it decides to kill one of its services, modify algorithms in its ad brokering system, or reorganize its contentious smart software units? “Always” opens the door to many issues.
Publishers (I assume “all” publishers) want traffic. May I demonstrate the frailty of the Xoogler’s argument? I publish a blog called Beyond Search. I have done this since 2008. I do not care if I get traffic or not. My goal was and remains to present commentary about the antics of high-technology companies and related subjects. Why do I do this? First, I want to make sure that my views about such topics as Google search exist. Second, I have set up my estate so the content will remain online long after I am gone. I am a publisher, and I don’t want traffic, or at least the type of traffic that Google provides. One exception causes an argument like the Xoogler’s to be shown as false, even if it is self-serving.
Third, the essay points its self-righteous finger at “regulators.” The essay suggests that elected officials pursued “illegitimate complaints” from publishers. I noted this passage:
Prior to these laws, no one ever asked permission to link to a website or paid to do so. Quite the contrary, if anyone got paid, it was the party doing the linking. Why? Because everybody wants traffic! After all, this is why advertising businesses — publishers and platforms alike — can exist in the first place. They offer distribution to advertisers, and the advertisers pay them because distribution is valuable and seldom free.
Repetition is okay, but I am able to recall one of the key arguments in this Xoogler’s write up: “Everybody wants traffic.” Since it is false, I am not sure the essay’s argumentative trajectory is on the track of logic.
Now we come to the guts of the essay: Artificial intelligence. What’s interesting is that AI magnetically pulls regulators back to the casino. Smart software companies face techno-feudalists in a high-stakes game. I noted this passage about anchoring statements via verification and just training algorithms:
The courts might or might not find this distinction between training and grounding compelling. If they don’t, Congress must step in. By legislating copyright protection for content used by AI for grounding purposes, Congress has an opportunity to create a copyright framework that achieves many competing social goals. It would permit continued innovation in artificial intelligence via the training and testing of LLMs; it would require licensing of content that AI applications use to verify their statements or look up new facts; and those licensing payments would financially sustain and incentivize the news media’s most important work — the discovery and verification of new information — rather than forcing the tech industry to make blanket payments for rewrites of what is already long known.
Who owns the casino? At this time, I would suggest that lobbyists and certain non-governmental entities exert considerable influence over some elected and appointed officials. Furthermore, some AI firms are moving as quickly as reasonably possible to convert interest in AI into revenue streams with moats. The idea is that if regulations curtail AI companies, consumers would not be well served. No 20-something wants to read a newspaper. That individual wants convenience and, of course, advertising.
Now several observations:
- The Xoogler author believes in AI going fast. The technology serves users / customers what they want. The downsides are bleats and shrieks from an outmoded sector; that is, those engaged in news
- The logic of the technologist is not the logic of a person who prefers nuances. The broad statements are false to me, for example. But to the Xoogler, these are self-evident truths. Get with our program or get left to sleep on cardboard in the street.
- The schism smart software creates is palpable. On one hand, there are those who “get it.” On the other hand, there are those who fight a meaningless battle with the inevitable. There’s only one problem: Technology is not delivering better, faster, or cheaper social fabrics. Technology seems to have some downsides. Just ask a journalist trying to survive on YouTube earnings.
Net net: The attitude of the Xoogler suggests that one cannot shake the sense of being right, entitlement, and logic associated with a Googler even after leaving the firm. The essay makes me uncomfortable for two reasons: [1] I think the author means exactly what is expressed in the essay. News is going to be different. Get with the program or lose big time. And [2] the attitude is one which I find destructive because technology is assumed to “do good.” I am not too sure about that because the benefits of AI are not known and neither are AI’s downsides. Plus, there’s the “everybody wants traffic.” Monopolistic vendors of online ads want me to believe that obvious statement is ground truth. Sorry. I don’t.
Stephen E Arnold, February 13, 2024
Sam AI-Man Puts a Price on AI Domination
February 13, 2024
AI start ups may want to amp up their fund raising. Optimism and confidence are often perceived as positive attributes. As a dinobaby, I think in terms of finding a deal at the discount supermarket. Sam AI-Man (actually Sam Altman) thinks big. Forget the $5 million investment in a semi-plausible AI play. “Think a bit bigger” is the catchphrase for OpenAI.
Thinking billions? You silly goose. Think trillions. Thanks, MidJourney. Close enough, close enough.
How does seven followed by 12 zeros strike you? A reasonable figure. Well, Mr. AI-Man estimates that’s the cost of building world AI dominating chips, content, and assorted impedimenta in a quest to win the AI dust ups in assorted global markets. “OpenAI Chief Sam Altman Is Seeking Up to $7 TRILLION (sic) from Investors Including the UAE for Secretive Project to Reshape the Global Semiconductor Industry” reports:
Altman is reportedly looking to solve some of the biggest challenges faced by the rapidly-expanding AI sector — including a shortage of the expensive computer chips needed to power large-language models like OpenAI’s ChatGPT.
And where does one locate entities with this much money? The news report says:
Altman has met with several potential investors, including SoftBank Chairman Masayoshi Son and Sheikh Tahnoun bin Zayed al Nahyan, the UAE’s head of security.
To put the figure in context, the article says:
It would be a staggering and unprecedented sum in the history of venture capital, greater than the combined current market capitalizations of Apple and Microsoft, and more than the annual GDP of Japan or Germany.
Several observations:
- The ante for big time AI has gone up
- The argument for people and content has shifted to chip facilities to fabricate semiconductors
- The fund-me tour is a newsmaker.
Net net: How about those small search-and-retrieval oriented AI companies? Heck, what about outfits like Amazon, Facebook, and Google?
Stephen E Arnold, February 13, 2024
Scattering Clouds: Price Surprises and Technical Labyrinths Have an Impact
February 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Yep, the cloud. A third-party time-sharing service with some 21st-century add-ons. I am not too keen on the cloud even though I am forced to use it for certain specific tasks. Others, however, think nothing of using the cloud like an invisible and infinite USB stick. “2023 Could Be the Year of Public Cloud Repatriation” strikes me as a “real” news story reporting that others are taking a look at the sky, spotting threatening clouds, and heading to a long-abandoned computer room to rethink their expenditures.
The write up reports:
Many regard repatriating data and applications back to enterprise data centers from a public cloud provider as an admission that someone made a big mistake moving the workloads to the cloud in the first place. I don’t automatically consider this a failure as much as an adjustment of hosting platforms based on current economic realities. Many cite the high cost of cloud computing as the reason for moving back to more traditional platforms.
I agree. However, there are several other factors which may reflect more managerial analysis than technical acumen; specifically:
- The cloud computing solution was better, faster, and cheaper. Better than an in house staff? Well, not for everyone because cloud companies are not working overtime to address user / customer problems. The technical personnel have other fires, floods, and earthquakes. Users / customers have to wait unless the user / customer “buys” dedicated support staff.
- So the “cheaper” argument becomes an issue. In addition to paying for escalated support, one has to deal with Byzantine pricing mechanisms. If one considers any of the major cloud providers, one can spend hours reading how to manage certain costs. Data transfer is a popular subject. Activated but unused services are another. Why is pricing so intricate and complex? Answer: Revenue for the cloud providers. Many customers are confident the big clouds are their friend and have their best financial interests at heart. That’s true. It is just that the heart is in the cloud computer books, not the user / customer balance sheets.
- And better? For certain operations, a user / customer has limited options. The current AI craze means the cloud is the principal game in town. Payroll, sales management, and Webby stuff are also popular functions to move to the cloud.
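The “cheaper” point in the list above is easy to see in a toy calculation. Compute, storage, and data transfer are metered separately, and egress is often the surprise line item. Every rate and free-tier threshold below is invented for illustration; no vendor’s actual price list is being quoted.

```python
# Toy monthly-bill estimator showing why cloud pricing is hard to reason
# about: each resource is metered on its own schedule. All rates below are
# invented assumptions for illustration, not any vendor's actual prices.

def monthly_bill(vm_hours: float, storage_gb: float, egress_gb: float) -> float:
    VM_RATE = 0.10         # $ per VM-hour (assumed)
    STORAGE_RATE = 0.023   # $ per GB-month (assumed)
    FREE_EGRESS_GB = 100   # free data-transfer-out allowance (assumed)
    EGRESS_RATE = 0.09     # $ per GB beyond the free allowance (assumed)

    billable_egress = max(0.0, egress_gb - FREE_EGRESS_GB)
    return (vm_hours * VM_RATE
            + storage_gb * STORAGE_RATE
            + billable_egress * EGRESS_RATE)
```

With these made-up rates, one always-on VM (720 hours), a terabyte of storage, and 600 GB of outbound traffic lands at $140 a month, and the egress portion alone is $45. Multiply the line items by a real vendor’s dozens of metered services and the hours spent reading pricing documentation start to make sense.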
The rationale for shifting to the cloud varies, but there are some themes which my team and I have noted in our work over the years:
First, the cloud allowed “experts” who cost a lot of money to be hired by the cloud vendor. Users / customers did not have to have these expensive people on their staff. Plus, there are not that many experts who are really expert. The cloud vendor has the smarts to hire the best and the resources to pay these people accordingly… in theory. But bean counters love to cut costs so IT professionals were downsized in many organizations. The mythical “power user” could do more and gig workers could pick up any slack. But the costs of cloud computing held a little box with some Tannerite inside. Costs for information technology were going up. Wouldn’t it be cheaper to do computing in house? For some, the answer is, “Yes.”
An ostrich company with its head in the clouds, not in the sand. Thanks, MidJourney, what a not-even-good-enough illustration.
Second, most organizations lacked the expertise to manage a multi-cloud set up. When an organization has two or more clouds, one cannot allow a cloud company to manage itself and one or more competitors. Therefore, organizations had to add to their headcount a new and expensive position: A cloud manager.
Third, the cloud solutions are not homogeneous. Different rules of the road, different technical set up, and different pricing schemes. The solution? Add another position: A technical manager to manage the cloud technologies.
I will stop with these three points. One can rationalize using the cloud easily; for example, a government agency can push tasks to the cloud. Some work in government agencies consists entirely of attending meetings at which third-party contractors explain what they are doing and why an engineering change order is priority number one. Who wants to do this work as part of a nine-to-five job?
But now there is a threat to the clouds themselves. That is security. What’s more secure? Data in a user / customer server facility down the hall or in a disused building in Piscataway, New Jersey, or sitting in a cloud service scattered wherever? Security? Cloud vendors are great at security. Yeah, how about those AWS S3 buckets or the Microsoft email “issue”?
My view is that a “where should our computing be done and where should our data reside” audit be considered by many organizations. People have had their heads in the clouds for a number of years. It is time to hold a meeting in that little-used computer room and do some thinking.
Stephen E Arnold, February 12, 2024
A Reminder: AI Winning Is Skewed to the Big Outfits
February 8, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have been commenting about the perception some companies have that AI start ups focusing on search will eventually reduce Google’s dominance. I understand the desire to see an underdog or a coalition of underdogs overcome a formidable opponent. Hollywood loves the unknown team which wins the championship. Movie goers root for an unlikely boxing unknown to win the famous champion’s belt. These wins do occur in real life. Some Googlers’ favorite sporting event is the NCAA tournament. That made-for-TV series features what are called Cinderella teams. (Will Walt Disney Co. sue if the subtitles for a game employ the word “Cinderella”? Sure, why not?)
I believe that for the next 24 to 36 months, Google will not lose its grip on search, its services, or online advertising. I admit that once one noses into 2028, more disruption will further destabilize Google. But for now, the Google is not going to be derailed unless an exogenous event ruins Googzilla’s habitat.
I want to direct attention to the essay “AI’s Massive Cash Needs Are Big Tech’s Chance to Own the Future.” The write up contains useful information about selected players in the artificial intelligence Monopoly game. I want to focus on one “worm” chart included in the essay.
Several things struck me:
- The major players are familiar; that is, Amazon, Google, Microsoft, Nvidia, and Salesforce. Notably absent are IBM, Meta, Chinese firms, Western European companies other than Mistral, and smaller outfits funded by venture capitalists relying on “open source AI solutions.”
- The five major companies in the chart are betting money on different roulette wheel numbers. VCs use the same logic by investing in a portfolio of opportunities and then pray to the MBA gods that one of these puppies pays off.
- The cross investments ensure that information leaks from the different color “worms” into the hills controlled by the big outfits. I am not using the collusion word or the intelligence word. I am just mentioning that information has a tendency to leak.
- Plumbing and associated infrastructure costs suggest that start ups may buy cloud services from the big outfits. Log files can be fascinating sources of information to the service providers’ engineers too.
My point is that smaller outfits are unlikely to be able to dislodge the big firms on the right side of the “worm” graph. The big outfits can, however, easily invest in, acquire, or learn from the smaller outfits listed on the left side of the graph.
Does a clever AI-infused search start up have a chance to become a big-time player? Sure, but I think it is more likely that once a smaller firm demonstrates some progress in a niche like Web search, a big outfit with cash will invest, duplicate, or acquire the feisty newcomer.
That’s why I am not counting on the Google to fall over dead in the next three years. I know my viewpoint is not one shared by some Web search outfits. That’s okay. Dinobabies often have different points of view.
Stephen E Arnold, February 8, 2024
Universities and Innovation: Clever Financial Plays May Help Big Companies, Not Students
February 7, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay in The Economist (a newspaper to boot) titled “Universities Are Failing to Boost Economic Growth.” The write up contained some facts anchored in dinobaby time; for example, “In the 1960s the research and development (R&D) unit of DuPont, a chemicals company, published more articles in the Journal of the American Chemical Society than the Massachusetts Institute of Technology and Caltech combined.”
A successful academic who exists in a revolving door between successful corporate employment and prestigious academic positions innovates with [a] a YouTube program, [b] sponsors who manufacture interesting products, and [c] taking liberties with the idea of reproducible results from his or her research. Thanks, MSFT Copilot Bing thing. Getting more invasive today, right?
I did not know that. I recall, however, that my former boss at Booz, Allen & Hamilton in the mid-1970s had me and a couple of other compliant worker bees work on a project to update a big-time report about innovation. My recollection is that our interviews with universities were less productive than conversations held at a number of leading companies around the world. University research departments had yet to morph into what were later called “technology transfer departments.” Over the years, as the Economist newspaper points out:
The golden age of the corporate lab then came to an end when competition policy loosened in the 1970s and 1980s. At the same time, growth in university research convinced many bosses that they no longer needed to spend money on their own. Today only a few firms, in big tech and pharma, offer anything comparable to the DuPonts of the past.
The shift, from my point of view, was that big companies could shift costs, outsource research, and cut themselves free from the wonky wizards that one could find wandering around the Cherry Hill Mall near the now-gone Bell Laboratories.
Thus, the schools became producers of innovation.
The Economist newspaper considers the question, “Why can’t big outfits surf on these university insights?” My question is, “Is the Economist newspaper overlooking the academic linkages that exist between the big companies producing lots of cash and a number of select universities?” IBM is proud to be camped out at MIT. Google operates two research annexes at Stanford University and the University of Washington. Even smaller companies have ties; for example, Megatrends is close to Indiana University by proximity and spiritually linked to a university in a country far away. Accidents? Nope.
The Economist newspaper is doing the Oxford debate thing: From a superior position, the observations are stentorian. The knife-like insights are crafted to cut those of lesser intellect down to size. Chop, slice, dice, like a smart kitchen appliance.
I noted this passage:
Perhaps, with time, universities and the corporate sector will work together more profitably. Tighter competition policy could force businesses to behave a little more like they did in the post-war period, and beef up their internal research.
Is the Economist newspaper on the right track with its university R&D and corporate innovation arguments?
In a word, “Yep.”
Here’s my view:
- Universities teamed up with companies to get money in exchange for cheaper knowledge work subsidized by eager graduate students and PR savvy departments
- Companies used the tie ups to identify ideas with the potential for commercial application and the young at heart and generally naive students, faculty, and researchers as a recruiting short cut. (It is amazing what some PhDs would do for a mouse pad with a prized logo on it.)
- Researchers, graduate students, esteemed faculty, and probably motivated adjunct professors with some steady income after being terminated in a “real” job started making up data. (Yep, think about the bubbling scandals at Harvard University, for instance.)
- Universities embraced the idea that education is a business. Ah, those student loan plays were useful. Other outfits used the reputation to recruit students who would pay for the cost of a degree in cash. From what countries were these folks? That’s a bit of a demographic secret, isn’t it?
Where are we now? Spend some time with recent college graduates. That will answer the question, I believe. Innovation today is defined narrowly. A recent report from Google identified companies engaged in the development of mobile phone spyware. How many universities in Eastern Europe were on the Google list? Answer: Zero. How many companies and state-sponsored universities were on the list? Answer: Zero. How comprehensive was the listing of companies in Madrid, Spain? Answer: Incomplete.
I want to point out that educational institutions have quite distinct innovation fingerprints. The Economist newspaper does not recognize these differences. A small number of companies are engaged in big-time innovation while most are in the business of being cute or clever. The Economist does not pay much attention to this. The individuals, whether in an academic setting or in a corporate environment, are more than willing to make up data, surf on the work of other unacknowledged individuals, or suck up good ideas and information and then head back to a home country to enjoy a life better than some of their peers experience.
If we narrow the focus to the US, we have an unrecognized challenge: dealing with shaped or synthetic information. In a broader context, the best instruction in certain disciplines is not in the US. One must look to other countries. In terms of successful companies, the financial rewards are shifting from innovation to me-too plays and old-fashioned monopolistic methods.
How do I know? Just ask a cashier (human, not robot) to make change without letting the cash register calculate what you will receive. Is there a fix? Sure, go for the next silver bullet solution. The method is working quite well for some. And what does “economic growth” mean? Defining terms can be helpful even to an Oxford Union influencer.
Stephen E Arnold, February 7, 2024
Education on the Cheap: No AI Required
January 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I don’t write about education too often. I do like to mention the plagiarizing methods of some academics. What fun! I located a true research gem (probably non-reproducible, hallucinogenic, or just synthetic but I don’t care). “Emergency-Hired Teachers Do Just as Well as Those Who Go Through Normal Training” states:
New research from Massachusetts and New Jersey suggests maybe not. In both states, teachers who entered the profession without completing the full requirements performed no worse than their normally trained peers.
A sanitation worker with a high school diploma is teaching advanced seventh graders about linear equations. The students are engaged… with their mobile phones. Hey, good enough, MSFT Copilot Bing thing. Good enough.
Then a modest question:
The better question now is why these temporary waivers aren’t being made permanent.
And what’s the write up say? I quote:
In other words, making it harder to become a teacher will reduce the supply but offers no guarantee that those who meet the bar will actually be effective in the classroom.
Huh?
Using people who slogged through college and learned something (one hopes) is expensive. Think of the cost savings when using those who are untrained and unencumbered with expectations of big money! When good enough is the benchmark of excellence, embrace those without a comprehensive four-year or longer education. Ooops. Who wants that?
I thought that I once heard that the best, most educated teaching professionals should work with the youngest students. I must have been doing some of that AI-addled thinking common among some in the old age home. When’s lunch?
Stephen E Arnold, January 26, 2024
Fujitsu: Good Enough Software, Pretty Good Swizzling
January 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The USPS is often interesting. The UK’s postal system, however, is much worse. I think we can thank the public-private US postal construct for not screwing over those who manage branch offices. Computer Weekly details how the UK postal system’s leaders knowingly had an IT problem and blamed employees: “Fujitsu Bosses Knew About Post Office Horizon IT Flaws, Says Insider.”
The UK postal system used the Post Office Horizon IT system supplied by Fujitsu. The Fujitsu bosses knowingly allowed it to be installed despite massive problems. Hundreds of UK subpostmasters were accused of fraud and false accounting. They were held liable. Many were imprisoned, had their finances ruined, and lost jobs. Many of the UK subpostmasters fought the accusations. It wasn’t until 2019 that the UK High Court established it was Horizon IT’s fault.
The Fujitsu team that “designed” the postal IT system didn’t have the correct education and experience for the project. The system was built on an earlier project that didn’t properly record and process payments. A developer on the project shared with Computer Weekly:
“‘To my knowledge, no one on the team had a computer science degree or any degree-level qualifications in the right field. They might have had lower-level qualifications or certifications, but none of them had any experience in big development projects, or knew how to do any of this stuff properly. They didn’t know how to do it.’”
The Post Office Horizon IT system was the largest commercial system in Europe, and it didn’t work. The software was bloated, transcribed gibberish, and was held together with the digital equivalent of Scotch tape. This case is the largest miscarriage of justice in recent UK history. Thankfully the truth has come out and the subpostmasters will be compensated. The compensation doesn’t return stolen time, but it will ease their current burdens.
Fujitsu is getting some scrutiny. Does the company manufacture grocery self-checkout stations? If so, more outstanding work.
Whitney Grace, January 25, 2024