Pragmatism or the Normalization of Good Enough
November 14, 2024
Sorry to disappoint you, but this blog post is written by a dumb humanoid. The art? We used MidJourney.
I recall that some teacher told me that the Mona Lisa painter fooled around more with his paintings than he did with his assistants. True or false? I don’t know. I do know that when I wandered into the Louvre in late 2024, there were people emulating sardines. These individuals wanted a glimpse of good old Mona.
Is Hamster Kombat the 2024 incarnation of the Mona Lisa? I think this image is part of the Telegram eGame’s advertising. Nice art. Definitely a keeper for the swipers of the future.
I read “Methodology Is Bullsh&t: Principles for Product Velocity.” The main idea, in my opinion, is to do stuff fast and adapt. I think this is similar to the go-go mentality of whatever genius said, “Move fast and break things.” This version of the Truth says:
All else being equal, there’s usually a trade-off between speed and quality. For the most part, doing something faster usually requires a bit of compromise. There’s a corner getting cut somewhere. But all else need not be equal. We can often eliminate requirements … and just do less stuff. With sufficiently limited scope, it’s usually feasible to build something quickly and to a high standard of quality. Most companies assign requirements, assert a deadline, and treat quality as an output. We tend to do the opposite. Given a standard of quality, what can we ship in 60 days? Recent escapades notwithstanding, Elon Musk has a similar thought process here. Before anything else, an engineer should make the requirements less dumb.
Would the approach work for the Mona Lisa dude or for Albert Einstein? I think Al fumbled along for years, asking people to help with certain mathy issues, and worrying about how he saw a moving train relative to one parked at the station.
I think the idea in the essay is the 2024 view of a practical way to get a product or service before prospects. The benefit of redefining “fast” in terms of a specification trimmed to an MVP or minimum viable product makes sense to TikTok scrollers and venture partners trying to find a pony to ride at a crowded kids’ party.
One of the touchstones in the essay, in my opinion, is this statement:
Our customers are engineers, so we generally expect that our engineers can handle product, design, and all the rest. We don’t need to have a whole committee weighing in. We just make things and see whether people like them.
I urge you to read the complete original essay.
Several observations:
- Some people, like the Mona Lisa dude, are engaged in a process of discovery, not shipping something good enough. Discovery takes some people time, lots of time. What happens along the way is part of expanding an information base.
- The go-go approach has interesting consequences; for example, based on admittedly anecdotal and flawed survey data, young users of social media evidence a number of interesting behaviors. The idea of “let ‘er rip” appears to have some impact on young people. Perhaps you have first-hand experience with this problem? I know people whose children have manifested quite remarkable behaviors. I do know that the erosion of basic mental functions like concentration is visible to me every time a teenager checks me out at the grocery store.
- By redefining excellence and quality, the notion of a high-value goal drops down a bit. Some new automobiles don’t work too well; consider, for example, the Tesla Cybertruck owner whose vehicle was not able to leave the dealer’s lot.
Net net: Is a Telegram mini app Hamster Kombat today’s equivalent of the Mona Lisa?
Stephen E Arnold, November 14, 2024
Bring Back Bell Labs…Wait, Google Did…
November 12, 2024
Bell Labs was once a magical inventing wonderland that established the foundation for modern communication, including the Internet. Everything was great at Bell Labs until projects got deadlines and creativity was stifled. Hackaday examines the history of the mythical place and discusses whether there could ever be a new Bell Labs in “What Would It Take To Recreate Bell Labs?”
Bell Labs employees were allowed to tinker on their projects for years as long as they focused on something to benefit the larger company. These fields ranged from metallurgy and optics to semiconductors and more. Bell Labs worked with Western Electric and AT&T. These partnerships resulted in the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), the Unix operating system, and more.
What made Bell Labs special was that inventors were allowed to let their creativity marinate and explore their ideas. This came to a screeching halt in 1982 when the US courts ordered AT&T to break up. Western Electric became Lucent Technologies and took Bell Labs with it. The creativity and gift of time disappeared too. Could Bell Labs exist today? No, not as it was. It would need to be updated:
The short answer to the original question of whether Bell Labs could be recreated today is thus a likely ‘no’, while the long answer would be ‘No, but we can create a Bell Labs suitable for today’s technology landscape’. Ultimately the idea of giving researchers leeway to tinker is one that is not only likely to get big returns, but passionate researchers will go out of their way to circumvent the system to work on this one thing that they are interested in.
Google did have a new incarnation of Bell Labs. What did it invent? Google Glass and billions in revenue from actions explained in the novel 1984.
Whitney Grace, November 12, 2024
Boring Technology Ruins Innovation: Go, Chaos!
October 25, 2024
Jonathan E. Magen is an experienced computer scientist who writes a blog called Yonkeltron. He recently posted “Boring Tech Is Stifling Improvement.” After a brief anecdote about a highway repair that wasn’t hindered by bureaucracy because the repair crew used a new material to speed up the job, Magen got to thinking about the current state of tech.
He thinks it is boring.
Magen supports tech teams being allocated budgets to adopt new technology. The adage of “don’t fix what’s not broken” comes to mind, but sometimes newer is definitely better. He relates that it is problematic if tech teams juggle too many technologies or solutions, but there’s also the problem of the one-size-fits-all solution that no longer works. It’s like having a document that can only be opened by Microsoft Office when you don’t have the software. It’s called a monoculture with a single point of failure. Tech nerds and philosophers have names for everything!
Magen bemoans that a boring tech environment is a buzzkill. He then shares these “happy thoughts”:
“A second negative effect is the chilling of innovation. Creating a better way of doing things definitionally requires deviation from existing practices. If that is too heavily disincentivized by “engineering standards”, then people don’t feel they have enough freedom to color outside the lines here and there. Therefore, it chills innovation in company environments where good ideas could, conceivably, come from anywhere. Put differently, use caution so as not to silence your pioneers.
Another negative effect is the potential to cause stagnation. In this case, devotion to boring tech leads to overlooking better ways of doing things. Trading actual improvement and progress for “the devil you know” seems a poor deal. One of the main arguments in favor of boring tech is operability in the polycontext composed of predictability and repairability. Despite the emergence of Site Reliability Engineering (SRE), I think that this highlights a troubling industry trope where we continually underemphasize, and underinvest in, production operations.”
Necessity is the mother of invention, but boring is the killer of innovation. Bring on chaos.
Whitney Grace, October 25, 2024
Stupidity: Under-Valued
September 27, 2024
We’re taught from a young age that being stupid is bad. The stupid kids don’t move on to higher grades and they’re ridiculed on the playground. We’re also fearful of showing our stupidity, which often goes hand in hand with ignorance. Both cause embarrassment and fear, but Math For Love has a different perspective: “The Centrality Of Stupidity In Mathematics.”
Math For Love is a Web site dedicated to revolutionizing how math is taught. They have games, curricula, and more demonstrating how beautiful and fun math is. Math is one of those subjects that makes a lot of people feel dumb, especially at the higher levels. The Math For Love team referenced an essay by Martin A. Schwartz called “The Importance Of Stupidity In Scientific Research.”
Schwartz is a microbiologist and professor at the University of Virginia. In his essay, he expounds on how modern academia makes people feel stupid.
The stupid feeling is one of inferiority. It’s a problem. We’re made to believe that doctors, engineers, scientists, teachers, and other smart people never experienced any difficulty. Schwartz points out that students (and humanity) need to learn that research is extremely hard. No one starts out at the top. He also says that they need to be taught how to be productively stupid, i.e., if you don’t feel stupid, then you’re not really trying.
Humans are meant to feel stupid; otherwise they wouldn’t investigate, explore, or experiment. There’s an entire era in Western history about overcoming stupidity: the Enlightenment. Math For Love explains that stupidity is relative to age: as children grow, they overcome certain levels of stupidity, aka ignorance. Kids gain comprehension of an idea, then apply it to life. It’s the literal meaning of the adage: once a mind has been stretched, it can’t go back to its original size.
“I’ve come to believe that one of the best ways to address the centrality of stupidity is to take on two opposing efforts at once: you need to assure students that they are not stupid, while at the same time communicating that feeling like they are stupid is totally natural. The message isn’t that they shouldn’t be feeling stupid – that denies their honest feeling to learning the subject. The message is that of course they’re feeling stupid… that’s how everyone has to feel in order to learn math!”
Add some warm feelings to the equation and subtract self-consciousness, multiply by practice, and divide by intelligence level. That will round out stupidity and make it productive.
Whitney Grace, September 27, 2024
E2EE: Not Good Enough. So What Is Next?
May 21, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
What’s wrong with software?
I think one !*#$ thing about the state of technology in the world today is that for so many people, their job, and therefore the thing keeping a roof over their family’s head, depends on adding features, which then incentivizes people to, well, add features. Not to make and maintain a good app.
Who has access to the encrypted messages? Someone. That’s why this young person is distraught as she is escorted to the police van. Thanks, MSFT Copilot. Good enough.
This statement appears in “A Rant about Phone Messaging Apps UI.” But there are some more interesting issues in messaging; specifically, E2EE or end-to-end encrypted messaging. The current example of talking about the wrong topic in a quite important application space is summarized in Business Insider, an estimable online publication with snappy headlines like this one: “In the Battle of Telegram vs Signal, Elon Musk Casts Doubt on the Security of the App He Once Championed.” That write up reports as “real” news:
Signal has also made its cryptography open-source. It is widely regarded as a remarkably secure way to communicate, trusted by Jeff Bezos and Amazon executives to conduct business privately.
I want to point out that Edward Snowden “endorses” Signal. He does not use Telegram. Does he know something that others may not have tucked into their memory stack?
The Business Insider “real” news report includes this quote from a Big Dog at Signal:
“We use cryptography to keep data out of the hands of everyone but those it’s meant for (this includes protecting it from us),” Whittaker wrote. “The Signal Protocol is the gold standard in the industry for a reason–it’s been hammered and attacked for over a decade, and it continues to stand the test of time.”
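For readers who want the mechanics behind that claim, here is a minimal sketch of the core E2EE idea in Python, using the open source pyca/cryptography package: the two parties agree on a shared secret via X25519 key exchange, derive a symmetric key, and encrypt with an AEAD cipher, so the relay in the middle never sees plaintext. This is an illustration only, not the Signal Protocol, which layers prekeys, a double ratchet, and deniable authentication on top of primitives like these.

```python
# A toy sketch of end-to-end encryption: X25519 key agreement plus
# ChaCha20-Poly1305 authenticated encryption. Illustrative only; the
# real Signal Protocol adds prekeys and the double ratchet on top.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_key(shared_secret: bytes) -> bytes:
    # Stretch the raw Diffie-Hellman output into a 32-byte cipher key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"e2ee-demo").derive(shared_secret)


# Each party generates a key pair; only the public halves cross the wire.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

# Both sides compute the same shared secret from the peer's public key.
alice_key = derive_key(alice.exchange(bob.public_key()))
bob_key = derive_key(bob.exchange(alice.public_key()))
assert alice_key == bob_key

# Alice encrypts; the server relaying this ciphertext cannot read it.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"meet at noon", None)

# Bob decrypts with the key he derived independently.
print(ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None))
```

The point of the exercise: the provider relays only public keys and ciphertext. The arguments between the messaging platforms, then, are really about metadata, key distribution, and whether the published protocol matches the shipped code.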
Pavel Durov, the owner of Telegram and the brother of a fellow with two PhDs (Nikolai Durov), suggests that Signal is insecure. Keep in mind that Mr. Durov has been the subject of some scrutiny: after telling the estimable Tucker Carlson that Telegram is about free speech, Telegram blocked Ukraine’s government from using a Telegram feature to beam pro-Ukraine information into Russia. That’s a sure-fire way to make clear which country catches Mr. Durov’s attention. He did this, according to rumors reaching me from a source with links to Ukraine, because Apple or maybe Google made him do it. Blaming the alleged US high-tech oligopolies is a good red herring, and a stinky one at that.
What’s Telegram got to do with the complaint about “features”? In my view, Telegram has been adding features at a pace more rapid than Signal, WhatsApp, and a boatload of competitors. Have those features created some vulnerabilities in the Telegram set up? In fact, I am not sure Telegram is merely a messaging platform. I also think that the company may be poised to do an end run around open sourcing its home-grown encryption method.
What does this mean? Here are a few observations:
- With governments working overtime to gain access to encrypted messages, Telegram may have to add some beef.
- Established firms and startups are nosing into obfuscation methods that push beyond today’s encryption.
- Information about who is behind an E2EE messaging service is tough to obtain. What is easy to document with a Web search may be one of those “fake” or misinformation plays.
Net net: E2EE is getting long in the tooth. Something new is needed. If you want to get a glimpse of the future, catch my lecture about E2EE at the upcoming US government Cycon 2024 event in September. Want a preview? We have a briefing. Write benkent2020 at yahoo dot com for restrictions and prices.
Stephen E Arnold, May 21, 2024
Interesting Observations: Do These Apply to “Technology Is a Problem Solver” Thinking?
February 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay by Nat Eliason, an entity previously unknown to me. “A Map Is Not a Blueprint: Why Fixing Nature Fails” is a short collection of the ways human thought processes create some quite spectacular problems. His examples include weight loss compounds like Ozempic, transfats, and the once-trendy solution to mental issues, the lobotomy.
Humans can generate a map of a “territory” or a problem space. Then humans dig in and try to make sense of their representation. The problem is that humans may approach a problem and get the solution wrong. No surprise there. One of the engines of innovation is coming up with a solution to a problem created by something incorrectly interpreted. A recent example is the befuddlement of Mark Zuckerberg when a member of the Senate committee questioning him about his company suggested that the quite wealthy entrepreneur had blood on his hands. No wonder he apologized for creating a service that has the remarkable power of bringing people closer together, well, sometimes.
Immature home economics students can apologize for a cooking disaster. Techno feudalists may have a more difficult time making amends. But there are lawyers and lobbyists ready and willing to lend a hand. Thanks, MSFT Copilot Bing thing. Good enough.
What I found interesting in Mr. Eliason’s essay was the model or mental road map humans create (consciously or unconsciously) to solve a problem. I am thinking in terms of social media, AI generated results for a peer-reviewed paper, and Web search systems which filter information to generate a pre-designed frame for certain topics.
Here’s the list of the five steps in the process that creates interesting challenges for those engaged in and affected by technology today:
- Smart people see a problem, study it, and identify options for responding.
- The operations are analyzed and then boiled down to potential remediations.
- “Using our map of the process we create a solution to the problem.”
- The solution works, but the downstream issues are not identified or anticipated in a thorough manner.
- New problems emerge as a consequence of having a lousy mental map of the original problem.
Interesting. Creating a solution to a technology-sparked problem without suffering the consequences may be one key to success. “I had no idea” or “I’m sorry” makes everything better.
Stephen E Arnold, February 16, 2024
Universities and Innovation: Clever Financial Plays May Help Big Companies, Not Students
February 7, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay in The Economist (a newspaper to boot) titled “Universities Are Failing to Boost Economic Growth.” The write up contained some facts anchored in dinobaby time; for example, “In the 1960s the research and development (R&D) unit of DuPont, a chemicals company, published more articles in the Journal of the American Chemical Society than the Massachusetts Institute of Technology and Caltech combined.”
A successful academic who exists in a revolving door between successful corporate employment and prestigious academic positions innovates with [a] a YouTube program, [b] sponsors who manufacture interesting products, and [c] taking liberties with the idea of reproducible results from his or her research. Thanks, MSFT Copilot Bing thing. Getting more invasive today, right?
I did not know that. I recall, however, that my former boss at Booz, Allen & Hamilton in the mid-1970s had me and a couple of other compliant worker bees work on a project to update a big-time report about innovation. My recollection is that our interviews with universities were less productive than conversations held at a number of leading companies around the world. University research departments had yet to morph into what were later called “technology transfer departments.” Over the years, as the Economist newspaper points out:
The golden age of the corporate lab then came to an end when competition policy loosened in the 1970s and 1980s. At the same time, growth in university research convinced many bosses that they no longer needed to spend money on their own. Today only a few firms, in big tech and pharma, offer anything comparable to the DuPonts of the past.
The shift, from my point of view, was that big companies could shift costs, outsource research, and cut themselves free from the wonky wizards that one could find wandering around the Cherry Hill Mall near the now-gone Bell Laboratories.
Thus, the schools became producers of innovation.
The Economist newspaper considers the question, “Why can’t big outfits surf on these university insights?” My question is, “Is the Economist newspaper overlooking the academic linkages that exist between the big companies producing lots of cash and a number of select universities?” IBM is proud to be camped out at MIT. Google operates two research annexes at Stanford University and the University of Washington. Even smaller companies have ties; for example, Megatrends is physically close to Indiana University and spiritually linked to a university in a country far away. Accidents? Nope.
The Economist newspaper is doing the Oxford debate thing: From a superior position, the observations are stentorian. The knife-like insights are crafted to cut those of lesser intellect down to size. Chop, slice, dice like a smart kitchen appliance.
I noted this passage:
Perhaps, with time, universities and the corporate sector will work together more profitably. Tighter competition policy could force businesses to behave a little more like they did in the post-war period, and beef up their internal research.
Is the Economist newspaper on the right track with its university R&D and corporate innovation arguments?
In a word, “Yep.”
Here’s my view:
- Universities teamed up with companies to get money in exchange for cheaper knowledge work subsidized by eager graduate students and PR-savvy departments.
- Companies used the tie-ups to identify ideas with the potential for commercial application and to use the young at heart and generally naive students, faculty, and researchers as a recruiting shortcut. (It is amazing what some PhDs would do for a mouse pad with a prized logo on it.)
- Researchers, graduate students, esteemed faculty, and probably motivated adjunct professors with some steady income after being terminated from a “real” job started making up data. (Yep, think about the bubbling scandals at Harvard University, for instance.)
- Universities embraced the idea that education is a business. Ah, those student loan plays were useful. Other outfits used the reputation to recruit students who would pay for the cost of a degree in cash. From what countries were these folks? That’s a bit of a demographic secret, isn’t it?
Where are we now? Spend some time with recent college graduates. That will answer the question, I believe. Innovation today is defined narrowly. A recent report from Google identified companies engaged in the development of mobile phone spyware. How many universities in Eastern Europe were on the Google list? Answer: Zero. How many companies and state-sponsored universities were on the list? Answer: Zero. How comprehensive was the listing of companies in Madrid, Spain? Answer: Incomplete.
I want to point out that educational institutions have quite distinct innovation fingerprints. The Economist newspaper does not recognize these differences. A small number of companies are engaged in big-time innovation while most are in the business of being cute or clever. The Economist does not pay much attention to this. The individuals, whether in an academic setting or in a corporate environment, are more than willing to make up data, surf on the work of other unacknowledged individuals, or suck up good ideas and information and then head back to a home country to enjoy a life better than some of their peers experience.
If we narrow the focus to the US, we have an unrecognized challenge: dealing with shaped or synthetic information. In a broader context, the best instruction in certain disciplines is not in the US. One must look to other countries. In terms of successful companies, the financial rewards are shifting from innovation to me-too plays and old-fashioned monopolistic methods.
How do I know? Just ask a cashier (human, not robot) to make change without letting the cash register calculate what you will receive. Is there a fix? Sure, go for the next silver bullet solution. The method is working quite well for some. And what does “economic growth” mean? Defining terms can be helpful even to an Oxford Union influencer.
Stephen E Arnold, February 7, 2024
Modern Poison: Models, Data, and Outputs. Worry? Nah.
January 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.
How many bad apples are in the spectrum of smart software? Give up? Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators eager to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.
Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically, a functional chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:
AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
Interesting. The article noted:
Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty … Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.
Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:
"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen… And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."
If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.
Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?
Nope.
Stephen E Arnold, January 29, 2024
Scientific American Spills the Beans on Innovation
December 21, 2023
This essay is the work of a dumb dinobaby. No smart software required.
It happened! A big, mostly respected publication called Scientific American explains where the Google-type outfits got their best ideas. Note: The write up “Tech Billionaires Need to Stop Trying to Make the Science Fiction They Grew Up on Real” does not talk about theft of intellectual property, doing shameless me-too products, or acquiring promising startups to make eunuchs of potential competitors.
Instead the Scientific American story asserts:
Today’s Silicon Valley billionaires grew up reading classic American science fiction. Now they’re trying to make it come true, embodying a dangerous political outlook.
I can make these science fiction worlds a reality. I am going to see Star Wars for the seventh time. I will invent the future, says the enthusiastic wizardette in 1985. Thanks, MSFT Copilot. Know anyone at Microsoft like this young person?
The article says:
These men [the Brin-Page variants] collectively have more than half a trillion dollars to spend on their quest to realize inventions culled from the science fiction and fantasy stories that they read in their teens. But this is tremendously bad news because the past century’s science fiction and fantasy works widely come loaded with dangerous assumptions.
The essayist (a science fiction writer) explains:
We are not trying to accurately predict possible futures but to earn a living: any foresight is strictly coincidental. We recycle the existing material—and the result is influenced heavily by the biases of earlier writers and readers. The genre operates a lot like a large language model that is trained using a body of text heavily contaminated by previous LLMs; it tends to emit material like that of its predecessors. Most SF is small-c conservative insofar as it reflects the history of the field rather than trying to break ground or question received wisdom.
So what? The writer answers:
It’s a worryingly accurate summary of the situation in Silicon Valley right now: the billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat. Let’s hope there isn’t a cliff in front of us.
Is there a way to look down the runway? Sure, read more science fiction. Invent the future and tell oneself, “I am an innovator.” That may be true but of what? Right now it appears that reality is a less than enticing place. The main point is that today may be built on a fairly flimsy foundation. Hint: Don’t ask a person to make change when you pay in cash.
Stephen E Arnold, December 21, 2023
Will TikTok Go Slow in AI? Well, Sure
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The AI efforts of non-governmental organizations, government agencies, and international groups are interesting. Many resolutions, proclamations, blog polemics, etc. have been saying, “Slow down AI. Smart software will put people out of work. Destroy humans’ ability to think. Unleash the ‘I’ll be back’ guy.”
Getting those enthusiastic about smart software to slow down is a management problem. Thanks, MSFT Copilot. Good enough.
My stance in the midst of this fearmongering has been bemusement. I know that predicting the future perturbations of technology is as difficult as picking a Kentucky Derby winner and not picking a horse that will drop dead during the race. When groups issue proclamations and guidelines without an enforcement mechanism, not much is going to happen in the restraint department.
I submit as partial evidence for my bemusement the article “TikTok Owner ByteDance Joins Generative AI Frenzy with Service for Chatbot Development, Memo Says.” What seems clear, if the write up is mostly on the money, is that a company linked to China is joining “the race to offer AI model development as a service.”
Two quick points:
- Model development allows the provider to get a sneak peek at what the user of the system is trying to do. This means that information flows from customer to provider. (See the sketch after this list.)
- The company in the “race” is one of some concern to certain governments and their representatives.
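To make the first point concrete, here is a minimal sketch of what “model development as a service” looks like from the customer side. The endpoint and payload below are hypothetical, invented for this illustration; any real platform differs in detail but not in principle: whatever a customer uploads to tune or build a bot, the platform operator can inspect.

```python
# A minimal sketch of why model-development-as-a-service means customer
# data flows to the provider. The endpoint and payload format are
# hypothetical; the request is built but deliberately not sent.
import json
import urllib.request

training_examples = [
    {"prompt": "Summarize our Q3 sales figures", "completion": "..."},
    {"prompt": "Draft a reply to the merger inquiry", "completion": "..."},
]

# TLS protects this body in transit, but the connection terminates at
# the provider, which sees every prompt and completion in the clear.
request = urllib.request.Request(
    "https://api.example-botplatform.com/v1/fine-tunes",  # hypothetical
    data=json.dumps({"training_data": training_examples}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # not executed: the endpoint is fictional
```

In short, the customer’s product roadmap rides along with the training data.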
The write up says:
ByteDance, the Chinese owner of TikTok, is working on an open platform that will allow users to create their own chatbots, as the company races to catch up in generative artificial intelligence (AI) amid fierce competition that kicked off with last year’s launch of ChatGPT. The “bot development platform” will be launched as a public beta by the end of the month…
The cited article points out:
China’s most valuable unicorn has been known for using some form of AI behind the scenes from day one. Its recommendation algorithms are considered the “secret sauce” behind TikTok’s success. Now it is jumping into an emerging market for offering large language models (LLMs) as a service.
What other countries are beavering away on smart software? Will these drive in the slow lane or the fast lane?
Stephen E Arnold, December 7, 2023