Software Cannot Process Numbers Derived from Getty Pix, Honks Getty Legal Eagle

June 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Getty Asks London Court to Stop UK Sales of Stability AI System.” The write up comes from a service which, like Google, bandies about the word trust with considerable confidence. The main idea is that software is processing images available in the form of Web content, converting these to numbers, and using the zeros and ones to create pictures.

The write up states:

The Seattle-based company [Getty] accuses the company of breaching its copyright by using its images to “train” its Stable Diffusion system, according to the filing dated May 12, [2023].

I found this statement in the trusted write up fascinating:

Getty is seeking as-yet unspecified damages. It is also asking the High Court to order Stability AI to hand over or destroy all versions of Stable Diffusion that may infringe Getty’s intellectual property rights.

When I read this, I wonder if the scribes, upon learning about the threat Gutenberg’s printing press represented, were experiencing their “Getty moment.” The advanced technology of the adapted olive press and hand-carved wooden letters meant that the quill pen champions had to adapt or find their future emptying garderobes (aka chamber pots).

Scribes prepare to throw a Gutenberg printing press and the evil innovator Gutenberg in the Rhine River. Image was produced by the evil incarnate code of MidJourney. Getty is not impressed like letters on paper with the outputs of Beelzebub-inspired innovations.

How did that rebellion against technology work out? Yeah. Disruption.

What happens if the legal systems in the UK and possibly the US jump on the no-innovation train? Japan’s decision points to one option: Using what’s on the Web is just fine. And China? Yep, those folks in the Middle Kingdom will definitely conform to the UK and maybe US rules and regulations. What about outposts of innovation in Armenia? Johnnies on the spot (not pot, please). But what about those computer science students at Cambridge University? Jail and fines are too good for them. To the gibbet.

Stephen E Arnold, June 6, 2023

Will McKinsey Be Replaced by AI: Missing the Point of Money and Power

May 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read a very unusual anti-big company and anti-big tech essay called “Will AI Become the New McKinsey?” The thesis of the essay in my opinion is expressed in this statement:

AI is a threat because of the way it assists capital.

The argument upon which this assertion is perched boils down to this: capitalism, in its present form in today’s US of A, is roached. The choices available to make life into a hard rock candy mountain world are stark: Boost capitalism so that it, like cancer, kills everything including itself. The other alternative is to wait for the “government” to implement policies to convert the endless scroll into a post-1984 theme park.

Let’s consider McKinsey. Whether the firm likes it or not, it has become the poster child and revenue model for other services firms. Paying to turn on one’s steering wheel heating element is an example of McKinsey-type thinking. The fentanyl problem is an unintended consequence of offering some baller ideas to a few big pharma outfits in the US. There are other examples. I prefer to focus on some intellectual characteristics which make the firm into the symbol of that which is wrong with the good old US of A; to wit:

  1. MBA think. Numbers drive decisions, not feel-good ideas like togetherness, helping others, and emulating Twitch’s AI-powered ask_Jesus program. If you have not seen this, check it out at this link. It has 64 viewers as I write this on May 7, 2023, at 2 pm US Eastern.
  2. Hitting goals. These are either expressed as targets to consultants or passed along by executives to the junior MBAs pushing the millstone round and round with dot points, charts, graphs, and zippy jargon-speak. The incentive plan and its goals feed the MBAs. I think of these entities as cattle with some brains.
  3. Being viewed as super smart. I know that most successful consultants know they are smart. But many smart people who work at consulting firms like McKinsey are more insecure than an 11-year-old watching an Olympic gymnast flip and spin in an effortless manner. To overcome that insecurity, the MBA consultant seeks approval from his/her/its peers and from clients who eagerly pick the option the report was crafted to make a no-brainer. Yes, slaps on the back, lunch with a senior partner, and being identified as a person who would undertake grinding another rail car filled with wheat.

The essay, however, overlooks a simple fact about AI and similar “it will change everything” technology.

The technology does not do anything. It is a tool. The action comes from the individuals smart enough, bold enough, and quick enough to implement or apply it first. Once the momentum is visible, then the technology is shaped, weaponized, and guided to targets. The technology does not have much of a vote. In fact, technology is the millstone. The owner of the cattle is running the show. The write up ignores this simple fact.

One solution is to let the “government” develop policies. Another is for the technology to kill itself. Another is for those with money, courage, and brains to develop an ethical mindset. Yeah, good luck with these.

The government works for the big outfits in the good old US of A. No firm action against monopolies, right? Why? Lawyers, lobbyists, and leverage.

What’s the essay achieve? [a] Calling attention to McKinsey helps McKinsey sell. [b] Trying to gently push a lefty idea is tough when those who can’t afford an apartment in Manhattan are swiping their iPhones and posting on BlueSky. [c] Accepting the reality that technology serves those who understand and have the cash to use that technology to gain more power and money.

Ugly? Only for those excluded from the top of the social pyramid and juicy jobs at blue chip consulting firms, expertise in manipulating advanced systems and methods, and the mindset to succeed in what is the only game in town.

PS. MBAs make errors like the Bud Light promotion. That type of mistake, not opioid tactics, may be an instrument of change. But taming AI to make a better, more ethical world? That’s a comedy hook worthy of the next Sundar & Prabhakar show.

Stephen E Arnold, May 12, 2023

Researchers Break New Ground with a Turkey Baster and Zoom

April 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I do not recall much about my pre-school days. I do recall dropping my two children off at their pre-schools at different times. My recollections are fuzzy. I recall horrible finger paintings carried to the automobile and, several times a month, mashed pieces of cake. I recall quite a bit of laughing, shouting, and jabbering about classmates whom I did not know. Truth be told, I did not want to meet these progeny of highly educated, upwardly mobile parents who wore clothes with exposed logos and drove Volvo station wagons. I did not want to meet the classmates. The idea of interviewing pre-kindergarten children struck me as a waste of time and an opportunity to get chocolate Baskin-Robbins cake smeared on my suit. (I am a dinobaby, remember. Dress for success. White shirt. Conservative tie. Yada yada.)

I thought (briefly, very briefly) about the essay in ScienceDaily titled “Preschoolers Prefer to Learn from a Competent Robot Than an Incompetent Human.” The “real news” article reported without one hint of sarcastic, ironical skepticism:

We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot…

Okay. How were these data gathered? I absolutely loved the use of Zoom, a turkey baster, and nonsense terms like “fep.”

Fascinating. First, the idea of using Zoom and a turkey baster would never have roamed across this dinobaby’s mind. Second, the intuitive leap by the researchers that pre-schoolers who finger-paint would like to undertake this deeply intellectual task with a robot, not a human. The human, from my experience, is necessary to prevent the delightful sprouts from eating the paint. Third, I wonder if the research team’s first-year statistics professor explained the concept of a valid sample.

One thing is clear from the research. Teachers, your days are numbered unless you participate in the Singularity with Ray Kurzweil or are part of the school systems’ administrative group riding the nepotism bus.

“Fep.” A good word to describe certain types of research.

Stephen E Arnold, April 4, 2023

Subscription Thinking: More Risky Than 20-Somethings Think

March 2, 2023

I am delighted I don’t have to sit in meetings with GenX, GenY, and GenZ MBAs any longer. Now I talk to other dinobabies. Why am I not comfortable with the younger bright as a button humanoids? Here’s one reason: “Volkswagen Briefly Refused to Track Car with Abducted Child Inside until It Received Payment.”

I can visualize the group figuring out how to generate revenue instead of working to explain and remediate the fuel emission scam allegedly perpetrated by Volkswagen. The reasoning probably ran along the lines of, “Hey, let’s charge people for monitoring a VW.” Another adds: “Wow, easy money, and we avoid the blowback BMW got when it wanted money for heated seats.”

Did the VW young wizards consider the downsides of the problem? Did the super bright money spinners ask, “What contingencies are needed for a legitimate law enforcement request?” My hunch is that someone mentioned these and other issues, but the team was thinking about organic pizza for lunch or why the coffee pods were plain old regular coffee.

The cited article states:

The Sheriff’s Office of Lake County, Illinois, has reported on Facebook about a car theft and child abduction incident that took place last week. Notably, it said that a Volkswagen Atlas with tracking technology built in was stolen from a woman and when the police tried asking VW to track the vehicle, it refused until it received payment.

The company floundered and then assisted. The child was unharmed.

Good work VW. Now about software in your electric vehicles and the emission engineering issue? What do I hear?

The sweet notes of Simon & Garfunkel’s “The Sound of Silence”? So relaxing and stress-free: just like the chatter of those who were trying to rescue the child.

No, I never worry about how the snow plow driver gets to work, thank you. I worry about incomplete thinking and specious methods of getting money from a customer.

Stephen E Arnold, March 2, 2023

Why Governments and Others Outsource… Almost Everything

January 24, 2023

I read a very good essay called “Questions for a New Technology.” The core of the write up is a list of eight questions. Most of these are problems for full-time employees. Let me give you one example:

Are we clear on what new costs we are taking on with the new technology? (monitoring, training, cognitive load, etc)

The challenge strikes me as the phrase “new technology.” By definition, most people in an organization will not know the details of the new technology. If a couple of people do, these individuals have to get the others up to speed. The other problem is that it is quite difficult for humans to look at a “new technology” and know about the knock-on or downstream effects. A good example is the craziness of Facebook’s dating objective and how the system evolved into a mechanism for social revolution. What in-house group of workers can tackle problems like that once the method leaves the dorm room?

The other questions probe similarly difficult tasks.

But my point is that most governments do not rely on their full time employees to solve problems. Years ago I gave a lecture at Cebit about search. One person in the audience pointed out that in that individual’s EU agency, third parties were hired to analyze and help implement a solution. The same behavior popped up in Sweden, the US, and Canada and several other countries in which I worked prior to my retirement in 2013.

Three points:

  1. Full time employees recognize the impossibility of tackling fundamental questions and don’t really try
  2. The consultants retained to answer the questions or help answer the questions are not equipped to answer the questions either; they bill the client
  3. Fundamental questions are dodged by management methods like “let’s push decisions down” or “we decide in an organic manner.”

Doing homework and making informed decisions is hard. A reluctance to learn, evaluate risks, and implement in a thoughtful manner is uncomfortable for many people. The result is the dysfunction evident in airlines, government agencies, hospitals, education, and many other disciplines. Scientific research is often non-reproducible. Is that a good thing? Yes, if one lacks expertise and does not want to accept responsibility.

Stephen E Arnold, January 25, 2023

Is SkyNet a Reality or a Plot Device?

January 20, 2023

We humans must resist the temptation to outsource our reasoning to an AI, no matter how trustworthy it sounds. This is because, as iai News points out, “All-Knowing Machines Are a Fantasy.” Society is now in danger of confusing fiction with reality, a mistake that could have serious consequences. Professors Emily M. Bender and Chirag Shah observe:

“Decades of science fiction have taught us that a key feature of a high-tech future is computer systems that give us instant access to seemingly limitless collections of knowledge through an interface that takes the form of a friendly (or sometimes sinisterly detached) voice. The early promise of the World Wide Web was that it might be the start of that collection of knowledge. With Meta’s Galactica, OpenAI’s ChatGPT and earlier this year LaMDA from Google, it seems like the friendly language interface is just around the corner, too. However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.”

The first problem is that language models do what they are built to do very well: they produce text that sounds human-generated. Authoritative, even. Listeners unconsciously ascribe human thought processes to the results. In truth, algorithms lack understanding, intent, and accountability, making them inherently unreliable as unvetted sources of information.
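The point that fluent output carries no understanding can be made concrete with a toy sketch (my illustration, not anything from the professors' essay): even a tiny bigram chain, trained on a handful of sentences, will emit grammatical-sounding text while having no notion of whether any of it is true. The corpus and names below are invented for the demonstration.

```python
import random

# A tiny bigram "language model": it learns only which word tends to
# follow which, nothing about whether a statement is true.
corpus = (
    "the system gives accurate answers . "
    "the system gives confident answers . "
    "confident answers sound accurate . "
    "accurate answers sound confident ."
).split()

# Map each word to the list of words observed immediately after it.
chains = {}
for a, b in zip(corpus, corpus[1:]):
    chains.setdefault(a, []).append(b)

def generate(start, n, rng):
    """Emit up to n words of fluent-sounding text by sampling the chain."""
    out = [start]
    for _ in range(n - 1):
        followers = chains.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

rng = random.Random(0)
print(generate("the", 8, rng))
# The output looks grammatical, but the model has no idea whether
# "confident answers" are in fact "accurate" -- it only mimics form.
```

Large language models are vastly more sophisticated, but the essay's argument is that the failure mode is the same in kind: the objective is plausible continuation, not verified truth.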

Next is the nature of information itself. It is impossible for an AI to tap into a comprehensive database of knowledge because such a thing does not exist and probably never will. The Web, with its contradictions, incomplete information, and downright falsehoods, certainly does not qualify. Though for some queries a quick, straightforward answer is appropriate (how many tablespoons in a cup?), most are not so simple. One must compare answers and evaluate provenance. In fact, the authors note, the very process of considering sources helps us refine our needs and context as well as assess the data itself. We miss out on all that when, in search of a quick answer, we accept the first response from any search system. That temptation is hard enough to resist with a good old-fashioned Google search. The human-like interaction with chatbots just makes it more seductive. The article notes:

“Over both evolutionary time and every individual’s lived experience, natural language to-and-fro has always been with fellow human beings. As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.”

That is a good point, though AI developers may not be eager to oblige. It remains up to us humans to resist temptation and take the time to think for ourselves.

Cynthia Murrell, January 20, 2023

Eczema? No, Terminator Skin

January 20, 2023

Once again, yesterday’s science fiction is today’s science fact. ScienceDaily reports, “Soft Robot Detects Damage, Heals Itself.” Led by Rob Shepherd, associate professor of mechanical and aerospace engineering, Cornell University’s Organic Robotics Lab has developed stretchable fiber-optic sensors. These sensors could be incorporated in soft robots, wearable tech, and other components. We learn:

“For self-healing to work, Shepard says the key first step is that the robot must be able to identify that there is, in fact, something that needs to be fixed. To do this, researchers have pioneered a technique using fiber-optic sensors coupled with LED lights capable of detecting minute changes on the surface of the robot. These sensors are combined with a polyurethane urea elastomer that incorporates hydrogen bonds, for rapid healing, and disulfide exchanges, for strength. The resulting SHeaLDS — self-healing light guides for dynamic sensing — provides a damage-resistant soft robot that can self-heal from cuts at room temperature without any external intervention. To demonstrate the technology, the researchers installed the SHeaLDS in a soft robot resembling a four-legged starfish and equipped it with feedback control. Researchers then punctured one of its legs six times, after which the robot was then able to detect the damage and self-heal each cut in about a minute. The robot could also autonomously adapt its gait based on the damage it sensed.”

Some of us must remind ourselves these robots cannot experience pain when we read such brutal-sounding descriptions. As if to make that even more difficult, we learn this material is similar to human flesh: it can easily heal from cuts but has more trouble repairing burn or acid damage. The write-up describes the researchers’ next steps:

“Shepherd plans to integrate SHeaLDS with machine learning algorithms capable of recognizing tactile events to eventually create ‘a very enduring robot that has a self-healing skin but uses the same skin to feel its environment to be able to do more tasks.'”

Yep, sci-fi made manifest. Stay tuned.

Cynthia Murrell, January 20, 2023

MBAs Dig Up an Old Chestnut to Explain Tech Thinking

January 19, 2023

Elon Musk is not afraid to share, or better to say tweet, about his buyout and subsequent takeover of Twitter. He has detailed how he cleared the Twitter swamp of “woke employees” and the accompanying “woke mind virus.” Musk’s actions have been described as a prime example of poor leadership skills and lauded as a return to a proper business. Musk and other rich business people see the current times as a war, but why? Vox’s article, “The 80-Year-Old Book That Explains Tech’s New Right-Wing Tilt,” quotes writer Antonio García Martínez:

“…who’s very plugged into the world of right-leaning Silicon Valley founders. García Martínez describes a project that looks something like reverse class warfare: the revenge of the capitalist class against uppity woke managers at their companies. ‘What Elon is doing is a revolt by entrepreneurial capital against the professional-managerial class regime that otherwise everywhere dominates (including and especially large tech companies),’ García Martínez writes. On the face of it, this seems absurd: Why would billionaires who own entire companies need to “revolt” against anything, let alone their own employees?”

García Martínez says the answer is in James Burnham’s 1941 book: The Managerial Revolution: What Is Happening In The World. Burnham wrote that the world was in late-stage capitalism, so the capitalist bigwigs would soon lose their power to the “managerial class.” These are people who direct industry and complex state operations. Burnham predicted that Nazi Germany and Soviet Russia would inevitably be the winners. He was wrong.

Burnham might have been right about the unaccountable managerial class, however, and experts in economics, finance, and politics declare his book the best description of the present. Burnham said the managerial revolution would work like this:

“The managerial class’s growing strength stems from two elements of the modern economy: its technical complexity and its scope. Because the tasks needed to manage the construction of something like an automobile require very specific technical knowledge, the capitalist class — the factory’s owners, in this example — can’t do everything on their own. And because these tasks need to be done at scale given the sheer size of a car company’s consumer base, its owners need to employ others to manage the people doing the technical work.

As a result, the capitalists have unintentionally made themselves irrelevant: It is the managers who control the means of production. While managers may in theory still be employed by the capitalist class, and thus subject to their orders, this is an unsustainable state of affairs: Eventually, the people who actually control the means of production will seize power from those who have it in name only.

How would this happen? Mainly, through nationalization of major industry.”

Burnham believed it was best if the government managed the economy, i.e., the USSR and Nazi Germany. The authoritarian governments killed that idea, but Franklin Roosevelt laid the groundwork for an administrative state with the New Deal.

The article explains that the current woke cancel culture war is viewed as a continuation of the New Deal. Managers have more important roles than the CEOs who control the money, so the CEOs are trying to maintain their relevancy and power. It could also be viewed as a societal shift toward a different work style and ethic, with the old guard refusing to lay down their weapons.

Does Burnham’s book describe Musk’s hostile and/or needed Twitter takeover? Yes and no. It depends on the perspective. It does make one wonder if big tech management is following the green light from Thomas Hobbes’ 1651 Leviathan.

Whitney Grace, January 19, 2023

Tech Needs: Programmers, Plumbing, and Prayers

January 17, 2023

A recent survey by open-source technology firm WSO2 asked 200 IT managers in Ireland and the UK about their challenges and concerns. BetaNews shares some of the results in, “IT Infrastructure Challenges Echo a Rapidly Changing Digital Landscape.” We learn of issues both short- and long-term. WSO2’s Ricardo Diniz describes the top three:

“The biggest IT challenge affecting decision-makers is ‘legacy infrastructure’. Fifty-five percent of those surveyed said it is a top challenge right now, although only 39 percent expect it to be a top challenge in three years’ time. This indicates a degree of confidence that legacy issues can be overcome, either through tools that integrate better with the legacy platforms, or the rollout of alternatives enabling legacy tech to be retired. Second on the list is ‘managing security risks’, cited by half of the respondents as a current problem, though only 41 percent expect to see it as an issue in the future. This is not surprising; given the headline-grabbing breaches and third-party risks facing organizations, resilience and protection are priorities. ‘Skills shortages in the IT team’ complete the top three challenges. It is an issue for 48 percent and is still expected to be a problem in three years’ time according to 39 percent of respondents. Notably, these three challenges are set to remain top of the list – albeit at a slightly less troublesome level – in three years’ time.”

A couple other challenges, however, seem on track to remain just as irksome in three years. One is businesses’ transition to the cloud, currently in progress. Most respondents, concerned about integrations with legacy systems and maximizing ROI, hesitate to move all their operations to the cloud and prefer a hybrid approach. Diniz recommends cloud vendors remain flexible.

The other stubborn issue is API integration and management. Though APIs are fundamental to IT infrastructure, Diniz writes, IT leaders seem unsure how to wield them effectively. As a company quite familiar with APIs, WSO2 has published some advice on the matter. Founded in 2005, WSO2 is based in Silicon Valley and maintains offices around the world.

Cynthia Murrell, January 17, 2023

Insight about Software and Its Awfulness

January 10, 2023

Software is great, isn’t it? Try to do hanging indents with numbers in Microsoft Word. If you want this function without wasting time with illogical and downright weird controls, call a Microsoft Certified Professional to code what you need. Law firms are good customers. What about figuring out which control in BlackMagic DaVinci delivers the effect you want? No problem. Hire someone who specializes in the mysteries of this sort of free software. No expert in Princeton, Illinois, or Bear Dance, Montana? Do the Zoom thing with a gig worker. That’s efficient. There are other examples; for instance, do you want to put your MP3s on an iPhone? Yeah, no problem. Just ask a 13-year-old. She may do the transfer for less than an Apple Genius.

Why is software awful?

“There Is No Software Maintenance” takes a step toward explaining what’s going on and what’s going to get worse. A lot worse. The write up states:

Software maintenance is simply software development.

I think this means that a minimal viable product is forever. What changes are wrappers, tweaks, and new MVP functions. Yes, that’s user friendly.

The essay reports:

The developers working on the product stay with the same product. They see how it is used, and understand how it has evolved.

My experience suggests that the mindset apparent in this article is the new normal.

The advantages are faster and cheaper development, quicker revenue, and a specific view of the customer as irrelevant even if he, she, or it pays money.

The downsides? I jotted down a few which occurred to me:

  1. Changes may or may not “work”; that is, printing is killed. So what? Just fix it later.
  2. Users’ needs are secondary to what the product wizards are going to do. Oh, well, let’s take a break and not worry about today. Let’s plan for new features for tomorrow. Software is a moving target for everyone now.
  3. Assumptions about who will stick around to work on a system or software are meaningless. Staff quit, staff are RIFed, and staff are just an entity on the other end of an email with a contract working in Bulgaria or Pakistan.

What’s being lost with this attitude or mental framing? How about trust, reliability, consistency, and stability?

Stephen E Arnold, January 10, 2023
