Why Is a Generative System Lazy? Maybe Money and Lousy Engineering

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Great post on the Xhitter. From @ChatGPT app:

we’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it

My experience with ChatGPT is that it responds like an intern working with my team between the freshman and sophomore years of college. Most of the information output is based on a “least effort” algorithm; that is, the shortest distance between A and B is vague promises.

An engineer at a “smart” software company leaps into action. Thanks, MSFT Copilot. Does this cartoon look like any of your technical team?

When I read about “unpredictable,” I wonder if people realize that probabilistic systems are wrong a certain percentage of the time; that is, some share of outputs will simply be off. The horse loses the race. Okay, a fact. The bet on that horse is a different part of the stall.

But the “lazier” comment evokes several thoughts in my dinobaby mind:

  1. Allocate less time per prompt to reduce the bottlenecks in a computationally expensive system; thus, laziness is a signal of crappy engineering
  2. Recognize that recycling results for frequent queries is a great way to give a user “something” close enough for horseshoes. (A sketch of what such recycling might look like appears after this list.) If the user is clever, that user will use words like “give me more” or some similar rah rah to trigger another pass through what’s available
  3. The costs of the system are so great that the Sam AI-Man operation is starved for cash for engineers, hardware, bandwidth, and computational capacity. Until there’s more dough, the pantry will be poorly stocked.
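
If point two is on the money, the mechanism is ordinary caching. Here is a minimal Python sketch of the idea; everything in it is hypothetical (the generate callable stands in for an expensive model call), and nothing here reflects OpenAI’s actual internals.

```python
import hashlib

# Hypothetical sketch of "recycling results for frequent queries."
# The generate() callable stands in for an expensive model call.

_cache: dict[str, str] = {}

def _normalize(prompt: str) -> str:
    # Collapse case and whitespace so near-duplicate prompts share a cache entry.
    return " ".join(prompt.lower().split())

def cached_generate(prompt: str, generate) -> str:
    key = hashlib.sha256(_normalize(prompt).encode()).hexdigest()
    if key in _cache:
        return _cache[key]        # cheap, possibly stale "something"
    answer = generate(prompt)     # the expensive, compute-hungry path
    _cache[key] = answer
    return answer

# A clever user's "give me more" changes the prompt text, misses the
# cache, and forces another pass through what's available.
```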

Net net: Lazy may be a synonym for more serious issues. How does one make AI perform? Fabrication and marketing seem to be useful.

Stephen E Arnold, December 13, 2023

Allegations That Canadian Officials Are Listening

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Widespread Use of Phone Surveillance Tools Documented in Canadian Federal Agencies

It appears a baker’s dozen of Canadian agencies are ignoring a longstanding federal directive on privacy protections. Yes, Canada. According to CBC/Radio-Canada, “Tools Capable of Extracting Personal Data from Phones Being Used by 13 Federal Departments, Documents Show.” The trend surprised even York University associate professor Evan Light, who filed the original access-to-information request. Reporter Brigitte Bureau shares:

Many people, it seems, are listening to Grandma’s conversations in a suburb of Calgary. (Nice weather in the winter.) Thanks, MSFT Copilot. I enjoyed the flurry of messages that you were busy creating my other image requests. Just one problemo. I had only one image request.

“Tools capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies, according to contracts obtained under access to information legislation and shared with Radio-Canada. Radio-Canada has also learned those departments’ use of the tools did not undergo a privacy impact assessment as required by federal government directive. The tools in question can be used to recover and analyze data found on computers, tablets and mobile phones, including information that has been encrypted and password-protected. This can include text messages, contacts, photos and travel history. Certain software can also be used to access a user’s cloud-based data, reveal their internet search history, deleted content and social media activity. Radio-Canada has learned other departments have obtained some of these tools in the past, but say they no longer use them. … ‘I thought I would just find the usual suspects using these devices, like police, whether it’s the RCMP or [Canada Border Services Agency]. But it’s being used by a bunch of bizarre departments,’ [Light] said.”

To make matters worse, none of the agencies had conducted the required Privacy Impact Assessments. A federal directive issued in 2002 and updated in 2010 requires such PIAs to be filed with the Treasury Board of Canada Secretariat and the Office of the Privacy Commissioner before any new activity that involves collecting or handling personal data. Light is concerned that agencies’ flat-out disregard of the directive means digital surveillance of citizens has become normalized. Join the club, Canada.

Cynthia Murrell, December 13, 2023

Interesting Factoid about Money and Injury Reduction Payoff of Robots at Amazon

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Who knows if the data in “Amazon’s Humanoid Warehouse Robots Will Eventually Cost Only $3 Per Hour to Operate. That Won’t Calm Workers’ Fears of Being Replaced” are accurate? Anyone who has watched a video clip about the Musky gigapress or the Toyota auto assembly process understands one thing: Robots don’t take breaks, require vacations, or need baloney promises that taking a college class will result in a promotion.

An unknown worker speaks with a hypothetical robot. The robot allegedly stepped on a worker named “John.” My hunch is that the company’s PR firm will make clear that John is doing just fine. No more golf or mountain climbing, but otherwise just super. Thanks, MSFT Copilot. Good enough.

The headline item is the most important; that is, the idea of a $3 per hour cost. That’s why automation, even if the initial robots are lousy, will continue apace. Once an outfit like Amazon figures out how to get “good enough” work from non-humans, it will be hasta la vista time.

However, the write up includes a statement which is fascinating in its vagueness. The context is that automation may mistake a humanoid for a box or a piece of equipment. The box is unlikely to file a lawsuit if the robot crushes it. The humanoid, on the other hand, will quickly be surrounded by a flock of legal eagles.

Here’s the passage which either says a great deal about Amazon or about the research effort invested in the article:

And it’s still not clear whether robots will truly improve worker safety. One whistleblower report in 2020 from investigative journalism site Reveal included leaked internal data that showed that Amazon’s robotic warehouses had higher injury rates than warehouses that don’t use robots — Amazon strongly refuted the report at the time, saying that the reporter was "misinterpreting data." "Company data shows that, in 2022, recordable incident rates and lost-time incident rates were 15% and 18% lower, respectively, at Amazon Robotics sites than non-robotics sites," Amazon says on its website.

I understand the importance of the $3 per hour cost. But the major item of interest is the incidence of accidents when humanoids and robots interact in a fast-paced picking and shipping setup. The information provided about injuries is thin and warrants closer analysis in my opinion. I loved the absence of numeric context for the assertion of a “lower” injury rate. Very precise.
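
To see why a relative figure without a base rate says so little, run the arithmetic with invented numbers. The base rates below are made up for illustration; the article supplies none, which is precisely the problem.

```python
# Invented base rates: the article supplies none, which is the point.
for base in (2.0, 12.0):                 # injuries per 100 workers (hypothetical)
    robotic = base * (1 - 0.15)          # "15% lower," per Amazon's claim
    print(f"base={base:.1f} -> robotic sites={robotic:.2f}")
# base=2.0 -> robotic sites=1.70
# base=12.0 -> robotic sites=10.20
# "15% lower" is equally true of a safe operation and a dangerous one.
```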

Stephen E Arnold, December 12, 2023

Google: Another Court Decision, Another Appeal, Rinse, Repeat

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

How long will the “loss” be tied up in courts? Answer: As long as possible.

I am going to skip the “what Google did” reports and focus on what I think is a quite useful list. The items in the list apply to Apple and Google, and I am not sure the single list is the best way to present what may be “clever” ways to dominate a market. But I will stick with what Echelon provided at this YCombinator link.

Two warring samurai find that everyone in the restaurant is a customer. The challenge becomes getting “more.” Thanks, MSFT Copilot. Good enough.

What does the list present? I interpreted the post as a “racket analysis.” Your mileage may vary:

Apple is horrible, but Google isn’t blameless.

Google and Apple are a duopoly that controls one of the most essential devices of our time. Their racket extends more broadly than Standard Oil. The smartphone is a critical piece of modern life, and these two companies control every aspect of them.

  • Tax 30%
  • Control when and how software can be deployed
  • Can pull software or deny updates
  • Prevent web downloads (Apple)
  • Sell ads on top of your app name or brand
  • Scare / confuse users about web downloads or app installs (Google)
  • Control the payment rails
  • Enforce using their identity and customer management (Apple)
  • Enforce using their payment rails (Apple)
  • Becoming the de-facto POS payment methods (for even more taxation)
  • Partnering with governments to be identity providers
  • Default search provider
  • Default browser
  • Prevent other browser runtimes (Apple)
  • Prevent browser tech from being comparable to native app installs (mostly Apple)
  • Unfriendly to repairs
  • Unfriendly to third party components (Apple)
  • Battery not replaceable
  • Unofficial pieces break core features due to cryptographic signing (Apple)
  • Updates obsolete old hardware
  • Green bubbles (Apple)
  • Tactics to cause FOMO in children (Apple)
  • Growth into media (movie studios, etc.) to keep eyeballs on their platforms (Apple)
  • Growth into music to keep eyeballs on their platforms

There are no other companies in the world with this level of control over such an important, cross-cutting, cross-functional essential item. If we compared the situation to auto manufacturers, there would be only two providers, you could only fuel at their gas stations, they’d charge businesses every time you visit, they’d display ads constantly, and you’d be unable to repair them without going to the provider. There need to be more than two providers. And if we can’t get more than two providers, then most of these unfair advantages need to be rolled back by regulators. This is horrific.

My team and I leave it to you to draw conclusions about the upsides and downsides of a techno-feudal setup. What’s next? Appeals, hearings, trials, judgment, appeals, hearings, and trials. Change? Unlikely for now.

Stephen E Arnold, December 12, 2023

The Click Derbies: Strong Runners Take the Lead

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Two unrelated reports about user behavior strike me as important.

The first is data from Pew Research about teens and social media. Are the data “new”? The phrase about “almost constant” usage is like the decision regarding Google as a monopoly: obvious behavior is difficult to overlook.

“Teens, Social Media and Technology” reports some allegedly accurate data I find suggestive; for example:

  • 90 percent of teenagers use YouTube. There are no data about what the teens watch; for example, transparent clothing, how to be healthy, or videos about 19th century philosophers
  • TikTok reaches 70 percent of teens in the 15 to 17 year old demographic. These are tomorrow’s leaders in business, technology, and medical research who will have fine-tuned their attention spans to the world of short, jazzy video
  • Facebook’s share of teens is now in the 30 percent range, and the “improved” Twitter is apparently losing some of its magnetic appeal.

The surprising factoids concern the 20 percent of the teens in the sample who use TikTok and YouTube “almost constantly.” The share of teens who say they are online with social media almost constantly has almost doubled in the last seven years. How much time remains to do homework? That question is not answered, but test scores suggest, “Not too much” for some teens.

A young and sprightly Temu is making the older runners look like losers. Thanks, MSFT Copilot. Good enough again.

The research report states:

Larger shares of Black and Hispanic teens report being on YouTube, Instagram and TikTok almost constantly, compared with a smaller share of White teens who say the same. Hispanic teens stand out in TikTok and Snapchat use. For instance, 32% of Hispanic teens say they are on TikTok almost constantly, compared with 20% of Black teens and 10% of White teens.

Social media and social media access are essentially unregulated by parents, educational institutions, and the government. Allowing teens to immerse themselves in streams of digital content may have some short term and long term downsides. Perhaps it is too late to reverse the corrosive effects of these information streams? I don’t want to be a Negative Ned, so I will say, “Of course not.”

The second report is about Temu, which allegedly has some connections to the Middle Kingdom. “Shoppers Spend Almost Twice as Long on Temu App Than Key Rivals” contains data which may or may not be spot on. Nevertheless, let’s look at what the article reports from an outfit called Apptopia:

On average, users spent 18 minutes per day on the Temu app in the second quarter, compared with 10 minutes for Amazon and 11 minutes for Alibaba Group Holding Ltd.’s AliExpress, based on Apptopia’s device-level analysis. Among younger users, the time spent on Temu was 19 minutes, it said.

Let’s assume that the data characterize one behavior: Those in the sample spend more time on the Temu app than on the Amazon service. I want to point out that comparing app usage to the undefined “Amazon” is an issue. Nevertheless, one question pops up: “Amazon, what’s causing users to spend less time on your service?” Maybe Amazon has a better interface, so a person can find a product more quickly. Maybe Amazon’s crazy quilt of prices turns people off? Maybe the magical “price changes” cause individuals like me to report that bait-and-switch methods are possibly in use? Maybe people see an Amazon price for something manufactured somewhere far from Toledo and think, “I will look elsewhere, get a better price, and ignore Toledo (a charming city).”

The article points to a different reason; to wit:

The addictive app is core to the strategy. It allows users to play games to win rewards, including spinning a roulette-like wheel to win a coupon — which goes up in value if you buy something within 10 minutes. The Temu app is available in more than 40 countries, though none have taken to it like customers in the US, where it’s Apple Inc.’s top app most days this year and sales have well and truly surpassed bargain-shopping giant Shein.

I interpret this to mean: Amazon is behind the times, overly bureaucratic, reacting to AI by trying to support every AI solution, and worrying about its regulator friends in Washington and Brussels.

Net net: On one hand we have an ideal conduit to deliver weaponized information to young people. On the other, we have once-nimble US companies watching Temu score goals.

Stephen E Arnold, December 12, 2023

Redefining Elite in the Age of AI: Nope, Redefining Average Is the News Story

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Business Insider has come up with an interesting swizzle on the AI thirst fest. “AI Is the Great Equalizer.” The subtitle is quite suggestive about a technology which is over 50 years in the making and just one year into its razzle-dazzle, next-big-thing moment with the OpenAI generative pre-trained transformer.

The teacher (the person with the tie) is not quite as enthusiastic about Billy, Kristie, and Mary. The teacher knows that each is a budding Einstein, a modern day Gertrude Stein, or an Ada Lovelace in the eyes of the parent. The reality is that big-time performers are a tiny percentage of any given cohort. One blue chip consulting firm complained that it had to interview 1,000 people to identify a person who could contribute. That was self-congratulatory, like Oscar Mayer slapping the Cinco Jotas label on a pack of baloney. The perceptions about the impact of a rapidly developing technology on average performers are interesting, but their validity is unknown. Thanks, MSFT Copilot, you have the parental pride angle down pat. What inspired you? A microchip?

In my opinion, the main idea in the essay is:

Education and expertise won’t count for as much as they used to.

Does this mean the falling scores for reading and math are a good thing? Just let one of the techno giants do the thinking: is that the message?

I loved this statement about working in law firms. In my experience, the assertion applies to consulting firms as well. There is only one minor problem, which I will mention after you scan the quote:

This is something the law-school study touches on. “The legal profession has a well-known bimodal separation between ‘elite’ and ‘nonelite’ lawyers in pay and career opportunities,” the authors write. “By helping to bring up the bottom (and even potentially bring down the top), AI tools could be a significant force for equality in the practice of law.”

The write up points out that AI won’t have much of an impact on the “elite”; that is, the individuals who can think, innovate, and make stuff happen. About the hiring strategies of companies contacted about the impact of AI, the write up says:

They [These firms’ executives] are aiming to hire fewer entry-level people straight out of school, since AI can increasingly take on the straightforward, well-defined tasks these younger workers have traditionally performed. They plan to bulk up on experts who can ace the complicated stuff that’s still too hard for machines to perform.

The write up is interesting, but it is speculative, not a report of what’s happening.

Here’s what we know about the ChatGPT-type revolution after one year:

  1. Cyber criminals have figured out how to use generative tools to crank out the sentences and scripts their schemes require. Score one for the bad actors.
  2. Older people are either reluctant to fool around with, or fearful of, what appears to be “magical” software. Therefore, the uptake at work is likely to be slower and probably more cautious than for some who are younger at heart. Score one for Luddites and automation-related protests.
  3. The younger folk will use any online service that makes something easier or more convenient. Want to buy contraband? Hit those Telegram-type groups. Want to write a report about a new procedure? Hey, let a ChatGPT-type system do it. Worry about its accuracy or appropriateness? Nope, not too much.

Net net: Change is happening, but the use of smart outputs by people who cannot read, do math, or think about Kant’s ideas is unlikely to do much more than add friction to an already creaky bureaucratic machine. As for the future, I don’t know. This dinobaby is not fearful of admitting it.

As for lawyers, remember what Shakespeare said:

“The first thing we do, let’s kill all the lawyers.”

The statement by Dick the Butcher may apply to quite a few in “knowledge” professions. Including some essayists like this dinobaby and many, many others. The rationale is to just keep the smartest ones. AI is good enough for everything else.

Stephen E Arnold, December 12, 2023

Problematic Smart Algorithms

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

We already know that AI is fundamentally biased if it is trained with bad or polluted data models. Most of these biases are unintentional, due to ignorance on the part of the developers, i.e., a lack of diversity or vetted information. In order to improve the quality of AI, developers are relying on educated humans to help shape the data models. Not all of the AI projects are looking to fix their polluted data, and ZDNet says it’s going to be a huge problem: “Algorithms Soon Will Run Your Life-And Ruin It, If Trained Incorrectly.”

Our lives are saturated with technology that has incorporated AI. Everything from an application used on a smartphone to a digital assistant like Alexa or Siri uses AI. The article tells us about another type of biased data, and it’s due to an ironic problem. The science team of Aparna Balagopalan, David Madras, David H. Yang, Dylan Hadfield-Menell, Gillian Hadfield, and Marzyeh Ghassemi worked on an AI project that studied how AI algorithms justified their predictions. The data model contained information from human respondents who provided different responses when asked to give descriptive or normative labels for data.

Descriptive labels concentrate on factual features, while normative labels involve value judgements about whether a rule is violated. The team noticed the pattern, so they conducted another experiment with four data sets to test different policies. The study asked the respondents to judge an apartment complex’s policy about aggressive dogs against images of canines with normative or descriptive tags. The results were astounding and scary:

"The descriptive labelers were asked to decide whether certain factual features were present or not – such as whether the dog was aggressive or unkempt. If the answer was "yes," then the rule was essentially violated — but the participants had no idea that this rule existed when weighing in and therefore weren’t aware that their answer would eject a hapless canine from the apartment.

Meanwhile, another group of normative labelers were told about the policy prohibiting aggressive dogs, and then asked to stand judgment on each image.

It turns out that humans are far less likely to label an object as a violation when aware of a rule and much more likely to register a dog as aggressive (albeit unknowingly ) when asked to label things descriptively.

The difference wasn’t by a small margin either. Descriptive labelers (those who didn’t know the apartment rule but were asked to weigh in on aggressiveness) had unwittingly condemned 20% more dogs to doggy jail than those who were asked if the same image of the pooch broke the apartment rule or not.”
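
The kind of gap the study reports is easy to compute with toy data. The labels below are invented for illustration; they are not the researchers’ actual data.

```python
# Invented toy labels illustrating the descriptive/normative gap the
# study reports; these are not the researchers' actual data.
descriptive = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]  # "is the dog aggressive?" (facts only)
normative   = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # "does this dog violate the policy?"

rate_d = sum(descriptive) / len(descriptive)
rate_n = sum(normative) / len(normative)
print(f"descriptive violation rate: {rate_d:.0%}")  # 70%
print(f"normative violation rate:   {rate_n:.0%}")  # 40%

# A model trained on descriptive labels but deployed to enforce a
# normative rule inherits the harsher judgments baked into the labels.
```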

The conclusion is that AI developers need to spread the word about this problem and find solutions. Then again, this could be another fear-mongering tactic like the Y2K implosion. What happened with that? Nothing. Yes, this is a problem, but it will probably be solved before society meets its end.

Whitney Grace, December 12, 2023

Did AI Say, Smile and Pay Despite Bankruptcy

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Going out of business is a painful event for [a] the whiz kids who dreamed up an idea guaranteed to baffle grandma, [b] the friends, family, and venture capitalists who funded the sure-fire next Google, and [c] the “customers” or more accurately the “users” who gave the product or service a whirl and some cash.

Therefore, one who took an entry-level philosophy class as a sophomore might have brushed against the thorny bush of ethics. Some get scratched, emulate the folks who wore chains and sharpened nails under their Grieve St Laurent robes, and read medieval wisdom literature for fun. Others just dump that baloney and focus on figuring out how to exit Dodge City without a posse riding hard after them.

The young woman learns that the creditors of an insolvent firm may “sell” her account to companies which operate on a “pay or else” policy. Imagine. You have lousy teeth and you could be put in jail. Look at the bright side. In some nation states, prison medical services include dental work. Anesthetic? Yeah. Maybe not so much. Thanks, MSFT Copilot. You had a bit of a hiccup this morning, but you spit out a tooth with an image on it. Close enough.

I read “Smile Direct Club shuts down after Filing for Bankruptcy – What It Means for Customers.” With AI customer service solutions available, one would think that a zoom zoom semi-high tech outfit would find a way to handle issues in an elegant way. Wait! Maybe the company did, and this article documents how smart software may influence certain business decisions.

The story is simple. Smile Direct could not make its mail order dental business pay off. The cited news story presents what might be a glimpse of the AI future. I quote:

Smile Direct Club has also revealed its "lifetime smile guarantee" it previously offered was no longer valid, while those with payment plans set up are expected to continue making payments. The company has not yet revealed how customers can get refunds.

I like the idea that a “lifetime” is vague; therefore, once the company dies, the user is dead too. I enjoyed immensely the alleged expectation that customers who are using the mail order dental service — even though it is defunct and not delivering its “product” — will have to keep making payments. I assume that the friendly folks at online payment services and our friends at the big credit card companies will just keep doing the automatic billing. (Those payment institutions have super duper customer service systems in my experience. Yours, of course, may differ from mine.)

I am looking forward to straightening out this story. (You know. Dental braces. Straightening teeth via mail order. High tech. The next Google. Yada yada.)

Stephen E Arnold, December 11, 2023

23andMe: Fancy Dancing at the Security Breach Ball

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Here’s a story I found amusing. Very Sillycon Valley. Very high school science clubby. Navigate to “23andMe Moves to Thwart Class-Action Lawsuits by Quietly Updating Terms.” The main point of the write up is that the firm’s security was breached. How? Probably those stupid customers or a cyber security vendor installing smart software that did not work.

How some influential wizards work to deflect actions hostile to their interests. In the cartoon, the Big Dog tells a young professional, “Just change the words.” Logical, right? Thanks, MSFT Copilot. Close enough for horseshoes.

The article reports:

Following a hack that potentially ensnared 6.9 million of its users, 23andMe has updated its terms of service to make it more difficult for you to take the DNA testing kit company to court, and you only have 30 days to opt out.

I have spit in a 23andMe tube. I’m good at least for this most recent example of hard-to-imagine security missteps. The article cites other publications but drives home what I think is a useful insight into the thought process of big-time Sillycon Valley firms:

customers were informed via email that “important updates were made to the Dispute Resolution and Arbitration section” on Nov. 30 “to include procedures that will encourage a prompt resolution of any disputes and to streamline arbitration proceedings where multiple similar claims are filed.” Customers have 30 days to let the site know if they disagree with the terms. If they don’t reach out via email to opt out, the company will consider their silence an agreement to the new terms.

No more neutral arbitrators, please. To make the firm’s intentions easier to understand, the cited article concludes:

The new TOS specifically calls out class-action lawsuits as prohibited. “To the fullest extent allowed by applicable law, you and we agree that each party may bring disputes against the other party only in an individual capacity, and not as a class action or collective action or class arbitration” …

I like this move for three reasons:

  1. It provides another example of how certain Information Highway contractors view the Rules of the Road. In a word, “flexible.” In another word, “malleable.”
  2. The maneuver is one that seems to be — how shall I phrase it — elephantine, not dainty and subtle.
  3. The “fix” for the problem is to make the estimable company less likely to get hit with massive claims in a court. Courts, obviously, are not to be trusted in some situations.

I find the entire maneuver chuckle invoking. Am I surprised at the move? Nah. You can’t kid this dinobaby.

Stephen E Arnold, December 11, 2023

Constraints Make AI More Human. Who Would Have Guessed?

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

AI developers could be one step closer to artificially recreating the human brain. Science Daily discusses a study from the University of Cambridge: “AI System Self-Organizes to Develop Features of Brains of Complex Organisms.” Neural systems must organize themselves, form connections, and balance an organism’s competing demands. They need energy and resources to grow an organism’s physical body, while also optimizing neural activity for information processing. This natural balancing act helps explain why different animal brains arrive at similar organizational solutions.

Brains are designed to solve and understand complex problems while exerting as little energy as possible. Biological systems usually evolve to make the most of the energy resources available to them.

“See how much better the output is when we constrain the smart software,” says the young keyboard operator. Thanks, MSFT Copilot. Good enough.

Scientists from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge experimented with this concept when they made a simplified brain model and applied physical constraints. The model developed traits similar to human brains.

The scientists tested the model brain system by having it navigate a maze. Maze navigation was chosen because it requires various tasks to be completed. The different tasks activate different nodes in the model. Nodes are similar to brain neurons. The brain model needed to practice navigating the maze:

“Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.

With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.”

The physical constraints on the model forced its nodes to react and adapt in ways similar to a human brain. The implication for AI is that such constraints could help algorithms handle more complex tasks more efficiently, as well as advance the evolution of “robot” brains.
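
The constraint the article describes can be captured in a few lines: give each node a position in space and make long connections expensive during training. The sketch below is a guess at the general shape of such a model, not the Cambridge group’s code; all names and numbers are illustrative.

```python
import numpy as np

# Illustrative sketch only: nodes embedded in 2-D space, with a training
# penalty that grows with connection length. Not the study's actual code.
rng = np.random.default_rng(0)
n = 16
positions = rng.uniform(0.0, 1.0, size=(n, 2))   # each node gets an (x, y) location
weights = rng.normal(0.0, 0.1, size=(n, n))      # recurrent connection strengths

# Pairwise Euclidean distances between node locations.
diff = positions[:, None, :] - positions[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

def wiring_cost(w: np.ndarray, d: np.ndarray, lam: float = 0.01) -> float:
    # Longer connections cost more, so learning is pushed toward local
    # wiring, mirroring the metabolic expense of long-range axons.
    return lam * float(np.sum(np.abs(w) * d))

# During maze training this term would be added to the task loss, so each
# weight update trades task performance against total wiring length.
print(wiring_cost(weights, dist))
```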

Whitney Grace, December 11, 2023
