Microsoft Code: Works Great. Just Like Bing AI

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For Windows users struggling with certain apps, help is not on the way anytime soon. In fact, reports TechRadar, “Windows 11 Is So Broken that Even Microsoft Can’t Fix It.” The issues started popping up for some users of Windows 11 and Windows 10 in January and seem to coincide with damaged registry keys. For now the company’s advice sounds deceptively simple: ditch its buggy software. Not a great look. Writer Matt Hanson tells us:

“On Microsoft’s ‘Health’ webpage regarding the issue, Microsoft notes that the ‘Windows search, and Universal Windows Platform (UWP) apps might not work as expected or might have issues opening,’ and in a recent update it has provided a workaround for the problem. Not only is the lack of a definitive fix disappointing, but the workaround isn’t great, with Microsoft stating that to ‘mitigate this issue, you can uninstall apps which integrate with Windows, Microsoft Office, Microsoft Outlook or Outlook Calendar.’ Essentially, it seems like Microsoft is admitting that it’s as baffled as us by the problem, and that the only way to avoid the issue is to start uninstalling apps. That’s pretty poor, especially as Microsoft doesn’t list the apps that are causing the issue, just that they integrate with ‘Windows, Microsoft Office, Microsoft Outlook or Outlook Calendar,’ which doesn’t narrow it down at all. It’s also not a great solution for people who depend on any of the apps causing the issue, as uninstalling them may not be a viable option.”

The write-up notes Microsoft says it is still working on these issues. Will it release a fix before most users have installed competing programs or, perhaps, even a different OS? Or maybe Windows 11 snafus are just what is needed to distract people from certain issues related to the security of Microsoft’s enterprise software. Will these code faults surface (no pun intended) in Microsoft’s smart software? Of course not. Marketing makes software better.

Cynthia Murrell, June 9, 2023

Sam AI-man Begs for Regulation; China Takes Action for Structured Data LLM Integration

May 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Smart software is capturing attention from a number of countries’ researchers. The US smart software scene is as crowded as productions on high school auditoria stages. Showing recently was OpenAI’s really sincere plea for regulation, oodles of new smart software applications and plug-ins for browsers, and Microsoft’s assembly line of AI everywhere in Office 365. The venture capital contingent is chanting, “Who wants cash? Who wants cash?” Plus the Silicon Valley media are beside themselves with in-crowd interviews with the big Googler and breathless descriptions of how college professors fumble forward with students who may or may not let ChatGPT do that dumb essay.


US Silicon Valley deciders in action in public discuss the need for US companies to move slowly, carefully, judiciously when deploying AI. In private, these folks want to go as quickly as possible, lock up markets, and rake in the dough. China skips the pretending and just goes forward with certain guidelines to avoid a fun visit to a special training facility. The illustration was created by MidJourney, a service which I assume wants to be regulated at least sometimes.

In the midst of this vaudeville production, I noted “Researchers from China Propose StructGPT to Improve the Zero-Shot Reasoning Ability of LLMs over Structured Data.” On the surface, the write up seems fairly tame by Silicon Valley standards. In a nutshell, whiz kids from a university I never heard of figure out how to reformat data in a database table and make those data available to a ChatGPT type system. The idea is that ChatGPT has some useful qualities. Being highly accurate is not a core competency, however.
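The core trick, as I read the paper, is flattening table rows into plain text a ChatGPT-type system can consume. Here is my own toy sketch of that linearization idea, purely for illustration; it is not the StructGPT code, and the function names are my inventions:

```python
# Toy sketch of table linearization for an LLM prompt.
# My own illustration of the general idea, not the StructGPT implementation.

def linearize_table(columns, rows):
    """Turn a database-style table into flat text an LLM can read."""
    lines = []
    for row in rows:
        # Pair each value with its column name: "name: value"
        pairs = [f"{col}: {val}" for col, val in zip(columns, row)]
        lines.append("; ".join(pairs))
    return "\n".join(lines)

def build_prompt(question, columns, rows):
    """Wrap the linearized table and a question into a single prompt."""
    table_text = linearize_table(columns, rows)
    return f"Given the table:\n{table_text}\nAnswer: {question}"

columns = ["country", "capital", "population_m"]
rows = [("France", "Paris", 68), ("Japan", "Tokyo", 125)]
print(build_prompt("Which country has the larger population?", columns, rows))
```

The prompt string then goes to whatever LLM one has on hand; the reformatting, not the model, is the interesting part.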

The good news is that the Chinese researchers have released some of their software and provided additional information on GitHub. Hopefully American researchers can take time out from nifty plug-ins, begging regulators to regulate, and banking bundles of pre-default bucks in JPMorgan accounts to give this work a look.

For me, the article makes clear:

  1. Whatever the US does, China is unlikely to trim the jib of technologies which can generate revenue, give China an advantage, and provide some new capabilities to its military.
  2. US smart software vendors have no choice but go full speed ahead and damn the AI powered torpedoes from those unfriendly to the “move fast and break things” culture. What’s a regulator going to do? I know. Have a meeting.
  3. Smart software is many things. I perceive that what can be accomplished with what I can download today, plus some fiddling with the method from Renmin University of China, the Beijing Key Laboratory of Big Data Management and Analysis Methods, and the University of Electronic Science and Technology of China, is a great big trampoline. Those jumping can do some amazing and hitherto unseen tricks.

Net net: Less talk and more action, please.

Stephen E Arnold, May 24, 2023

Italy Has an Interesting Idea Similar to Stromboli with Fried Flying Termites Perhaps?

April 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bureaucratic thought processes are amusing, not as amusing as Google’s Paris demonstration of Bard, but darned close. I spotted one example of what seems so darned easy but may be as tough as getting 17th-century Jesuits to embrace the concept of infinity. In short, mandating is different from doing.

“Italy Says ChatGPT Must Allow Users to Correct Inaccurate Personal Information” reports in prose which may or may not have been written by smart software. I noted this passage about “rights”:

[such as] allowing users and non-users of ChatGPT to object to having their data processed by OpenAI and letting them correct false or inaccurate information about them generated by ChatGPT…

Does anyone recall the Google right-to-remove capability? The issue was blocking data, not making a determination of whether the information was “accurate.”

In one of my lectures at the 2023 US National Cyber Crime Conference, I discuss, with examples, the issue of determining “accuracy.” My audience consists of government professionals who have resources to determine accuracy. I point out that accuracy is a slippery fish.

The other issue is getting whiz-bang Sillycon Valley hot stuff companies to implement reliable, stable procedures. Most of these outfits operate with Philz coffee, rock-star status at a specialist conference, or a next-generation Italian supercar in mind. Listening to Italian bureaucrats is not a key part of their thinking.

How will this play out? Hearings, legal proceedings, and then a shrug of the shoulders.

Stephen E Arnold, April 19, 2023

The Great Firewall of Florida Threatens the Chinese Culture!

April 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an amusing write up presented as “real news.” The story was distributed by the Associated Press and made available to its licensees / owners. The title is “Chinese Student Groups at UF condemn Banning of TikTok at Florida Universities.” Note that you will have to pay to view this article, which seems reasonable to me because I live in rural Kentucky and survive intellectually on outputs from the AP and newspapers in Florida.

The main point of the article is that Chinese students have written an essay which expresses outrage at the banning of Chinese applications. What applications? TikTok for one and a couple of messaging applications. The method for banning the applications relies on WiFi filtering and prohibiting the applications on university-owned computing devices.

The action, as I understand the write up, makes it difficult for a Chinese student to talk with relatives. Furthermore, the grousing students might lose their cultural identity.

A couple of observations:

  1. Are the Chinese students unaware and unable to work around the Great Firewall of Florida? The methods seem simple, cheap, and quick to me, but I, of course, am not in a mental tizzy about TikTok.
  2. What happens to Chinese students within the span of the nation state of China when these individuals use Facebook, Google, and other applications? My perception is that one’s social credit score drops and interesting opportunities to learn new skills in a work camp often become available.
  3. Is the old adage “A Chinese person remains Chinese regardless of where the citizen lives” no longer true? If it is true, how is one’s cultural identity impinged upon? If it is not true, what’s the big deal? Make a phone call.

Net net: The letter strikes me as little more than a propaganda effort. What disappoints me is that the AP article does not ask anyone about the possibility of a weaponized information action. The reasons:

  1. Not our job at the AP
  2. The reporter (stringer) did not think of the angle
  3. The editors did not have sufficient time to do what editors once did
  4. The extra work is too difficult and would get in the way of the Starbucks break.

Stephen E Arnold, April 13, 2023

PS: Why didn’t I quote from the AP story? Years ago, some big wheel at the AP whose name I don’t recall told me, “You must not quote from our stories”; therefore, no quote, and my perception that an important facet of this student essay has been ignored. I wonder if ChatGPT-type software wrote the article. I am not sure that’s my job. I cannot think of this angle. My editor did not have time. Plus, the “extra” work screws up our trip to Panera. The Starbucks near my office is — how shall I say this — a bit like the modern news business.

Human Abstract Jobs: These May Be Goners

April 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the late 1960s, the idea of converting technical information to online formats lit a rocket engine under the commercial database industry. I am not going to revisit topics I have covered in this blog since 2008. The key point is that humans created the majority of the digital versions of journal papers, technical reports, and other academic content. The cost of a single abstract in 1980 was about $17. Some commercial database producers spent less (Agricola, Pharmaceutical News Index, etc.) and others spent more (Chemical Abstracts, Disclosure, etc.).

In terms of time, an efficient production process could select, create, and index an abstract in a two- or three-day period, assuming a non-insane, MBA efficiency freak was not allowed to fiddle with each knowledge-value task making up the commercial database workflow.

That era is officially over. Good, bad, or indifferent, the old-school approach is not possible for many reasons. The big one is the application of technology in the system. Navigate to one of the new summarization services and follow the instructions. In exactly two and one half minutes, a mostly unreadable Google paper was converted into a list of dot points, a comprehensive summary, a ninth-grade reading level version, and a blog post (maybe a sixth-grade reading level?). Plus, the summary was indexed with a reasonable set of index terms.

You can plug in the name of the author (Jeff Dean, a Googler famous for his management acumen) and test the process on his November 2022 apologia “Efficiently Scaling Transformer Inference.” Snappy, eh?

With the authors’ abstract and the machine-generated dot points, the content of the article is easily absorbed.
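For a sense of how little machinery the dot-point half of the job requires, here is a toy frequency-based extractive summarizer. It is my own minimal sketch; any real service will use far more sophisticated models, and the scoring scheme here is an illustrative assumption, not anyone’s product:

```python
# A toy frequency-based extractive summarizer, sketching the kind of
# abstracting work once done by hand. Illustration only; commercial
# services use far more sophisticated methods.
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Pick the highest-scoring sentences, kept in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        # Average frequency of the sentence's words across the document.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in top)

paper = ("Transformers scale. Transformers scale inference across "
         "accelerators. The weather was pleasant.")
print(summarize(paper, max_sentences=1))
```

Three dozen lines of free software versus $17 and three days per abstract: that is the arithmetic the commercial database publishers face.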

Sayonara, commercial database publishers relying on human knowledge workers. Costco and WalMart are still hiring, I hear. Why spend money per hour on a human demanding breaks, health care, and vacations when software can do the job almost as well as, or better than, an expensive bio-centric creature? Software does not take bathroom breaks, which is another plus.

Stephen E Arnold, April 12, 2023

RightHub: Will It Supercharge IP Protection and Violation Trolls?

March 16, 2023

Yahoo, believe it or not, displayed an article I found interesting. The title was “Copy That: RightHub Wants To Be the Command Center for Intellectual Property Management.” The story originated on a Silicon Valley “real news” site called TechCrunch.

The write up explains that managing patent, trademark, and copyright information is a hassle. RightHub is, according to the story:

…something akin to what GoDaddy promises in the world of website creation, insofar as GoDaddy allows anyone to search, register, and renew domain names, with additional tools for building and hosting websites.

I am not sure that a domain-name type of model is going to have the professional, high-brow machinery that rights-sensitive outfits expect. I am not sure that many people understand that the domain-name model is fraught with manipulated expiry dates, wheeling and dealing, and possibly good old-fashioned fraud.

The idea of using a database and scripts to keep track of intellectual property is interesting. Tools are available to automate many of the discrete steps required to file, follow up, renew, and remember who did what and when.
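The renewal-deadline bookkeeping in particular is trivial to script. Here is a hypothetical sketch of such a checker; I have no knowledge of RightHub’s actual system, and the data layout and function names are my own assumptions:

```python
# Hypothetical sketch of an IP renewal-deadline tracker. RightHub's
# actual system is unknown to me; this only shows that the discrete
# steps (record, check, remind) are easy to automate.
from datetime import date, timedelta

def renewals_due(assets, today, window_days=90):
    """Return assets whose renewal date falls within the warning window."""
    cutoff = today + timedelta(days=window_days)
    return [a for a in assets if today <= a["renewal"] <= cutoff]

portfolio = [
    {"name": "Trademark: ACME", "renewal": date(2023, 5, 1)},
    {"name": "Patent: US1234567", "renewal": date(2024, 1, 15)},
]
for item in renewals_due(portfolio, today=date(2023, 3, 16)):
    print(f"Renewal due soon: {item['name']} on {item['renewal']}")
```

Wrap that in a nightly job with email alerts and most of the "follow up, renew, remember who did what" workflow is covered.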

But domain name processes as a touchstone?

Sorry. I think that the service will embrace a number of sub-functions which may be of interest to some people; for example, enforcement trolls. Many are using manual or outmoded tools like decades-old image recognition technology and partial Web content scanning methods. If RightHub offers a robust system, IP protection may become easier. Some trolls will be among the first to seek inspiration and possibly opportunities to be more troll-like.

Stephen E Arnold, March 16, 2023

The Confluence: Big Tech, Lobbyists, and the US Government

March 13, 2023

I read “Biden Admin’s Cloud Security Problem: It Could Take Down the Internet Like a Stack of Dominos.” I was thinking that the take down might be more like the collapses of outfits like Silicon Valley Bank.

I noted this statement about the US government, which is

embarking on the nation’s first comprehensive plan to regulate the security practices of cloud providers like Amazon, Microsoft, Google and Oracle, whose servers provide data storage and computing power for customers ranging from mom-and-pop businesses to the Pentagon and CIA.

Several observations:

  1. Lobbyists have worked to make it easy for cloud providers and big technology companies to generate revenue in an unregulated environment.
  2. Government officials have responded with inaction and spins through the revolving door. A regulator or elected official today becomes tomorrow’s technology decision maker and then back again.
  3. The companies themselves have figured out how to use their money and armies of attorneys to do what is best for the companies paying them.

What’s the consequence? Wonderful wordsmithing is one consequence. The problem is that now there are Mauna Loas burbling in different places.

Three of them are evident. The first is the fragility of the Silicon Valley approach to innovation, which is reactive and imitative at this time. The second is the complexity of the three-body problem resulting from lobbyists, government methods, and monopolistic behaviors. The third is that commercial enterprises have become familiar with the practice of putting their thumbs on the scale. Who will notice?

What will happen? The possible answers are not comforting. Waving a magic wand and changing what are now institutional behaviors established over decades of handcrafting will be difficult.

I touch on a few of the consequences in an upcoming lecture for the attendees at the 2023 National Cyber Crime Conference.

Stephen E Arnold, March 13, 2023

Why Governments and Others Outsource… Almost Everything

January 24, 2023

I read a very good essay called “Questions for a New Technology.” The core of the write up is a list of eight questions. Most of these are problems for full-time employees. Let me give you one example:

Are we clear on what new costs we are taking on with the new technology? (monitoring, training, cognitive load, etc)

The challenge, it strikes me, is the phrase “new technology.” By definition, most people in an organization will not know the details of the new technology. If a couple of people do, those individuals have to get the others up to speed. The other problem is that it is quite difficult for humans to look at a “new technology” and anticipate the knock-on or downstream effects. A good example is the craziness of Facebook’s dating objective and how the system evolved into a mechanism for social revolution. What in-house group of workers can tackle problems like that once the method leaves the dorm room?

The other questions probe similarly difficult tasks.

But my point is that most governments do not rely on their full-time employees to solve problems. Years ago I gave a lecture at Cebit about search. One person in the audience pointed out that in that individual’s EU agency, third parties were hired to analyze and help implement a solution. The same behavior popped up in Sweden, the US, Canada, and several other countries in which I worked prior to my retirement in 2013.

Three points:

  1. Full-time employees recognize the impossibility of tackling fundamental questions and don’t really try
  2. The consultants retained to answer the questions or help answer the questions are not equipped to answer the questions either; they bill the client
  3. Fundamental questions are dodged by management methods like “let’s push decisions down” or “we decide in an organic manner.”

Doing homework and making informed decisions is hard. Learning, evaluating risks, and implementing in a thoughtful manner are uncomfortable for many people. The result is the dysfunction evident in airlines, government agencies, hospitals, education, and many other disciplines. Scientific research is often non-reproducible. Is that a good thing? Yes, if one lacks expertise and does not want to accept responsibility.

Stephen E Arnold, January 25, 2023

College Student Builds App To Detect AI Written Essays: Will It Work? Sure

January 19, 2023

Artists are worried that AI algorithms will steal their jobs, and now writers are in the same boat because the same thing is happening to them. AI systems are now competent enough to write coherent text. Algorithms can write simple conversations, short movie scripts, flash fiction, and even assist in the writing process. Students are also excited about the prospect of AI writing algorithms, because it means they can finally outsource their homework to computers. Or they could have done that until someone was clever enough to design an AI that detects AI-generated essays. Business Insider reports on how a college student is now the bane of the global student body: “A Princeton Student Built An App Which Can Detect If ChatGPT Wrote An Essay To Combat AI-Based Plagiarism.”

Princeton computer science major Edward Tian spent his winter holiday designing an algorithm to detect whether an essay was written by the new AI writer ChatGPT. Dubbed GPTZero, Tian’s AI can identify what is written by a human and what is not. GPTZero works by rating text on how perplexing, complex, and random the writing is. GPTZero proved so popular that it crashed soon after its release. The app is now in a beta phase that people can sign up for, or they can use it on Tian’s Streamlit page.
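Tian has described GPTZero’s two signals as perplexity (how surprising the text is to a language model) and burstiness (how much sentence-to-sentence variation it shows). The perplexity half needs an actual language model, but the burstiness half can be sketched in a few lines. This is my own toy illustration, not GPTZero’s code:

```python
# Toy sketch of "burstiness": variance in sentence length. GPTZero's
# actual scoring also uses model perplexity; illustration only.
import re

def burstiness(text):
    """Variance of sentence lengths; flat AI prose tends to score low."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    lengths = [len(s.split()) for s in sentences if s]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = ("Stop. The afternoon light spilled across the long, "
          "cluttered desk while nobody spoke.")
print(burstiness(uniform), burstiness(varied))
```

Human writing tends to mix short and long sentences, so the second sample scores much higher than the first; machine text, at least today, reads flatter.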

Tian’s desire to prevent AI plagiarism motivated him to design GPTZero:

“Tian, a former data journalist with the BBC, said that he was motivated to build GPTZero after seeing increased instances of AI plagiarism. ‘Are high school teachers going to want students using ChatGPT to write their history essays? Likely not,’ he tweeted.”

AI writing algorithms are still in their infancy, like art-generation AI. Writers should not fear job replacement yet. Artistic AI puts the arts in the same position painting occupied with the arrival of photography, radio with television, and libraries with the Internet. The mediums will change, but portions will persevere and others will transform. AI should be used as a tool to improve the process.

Students would never find and use a work-around.

Whitney Grace, January 19, 2023

FAA Software: Good Enough?

January 11, 2023

Is today’s software good enough? For many, the answer is, “Absolutely.” I read “The FAA Grounded Every Single Domestic Flight in the U.S. While It Fixed Its Computers.” The article states what many people in affected airports know:

The FAA alerted the public to a problem with the system at 6:29 a.m. ET on Twitter and announced that it had grounded flights at 7:19 a.m. ET. While the agency didn’t provide details on what had gone wrong with the system, known as NOTAM, Reuters reported that it had apparently stopped processing updated information. As explained by the FAA, pilots use the NOTAM system before they take off to learn about “closed runways, equipment outages, and other potential hazards along a flight route or at a location that could affect the flight.” As of 8:05 a.m. ET, there were 3,578 delays within, out, and into the U.S., according to flight-tracking website FlightAware.

NOTAM, for those not into government speak, means “Notice to Air Missions.”

Let’s go back in history. In the 1990s I think I was on the Board of the National Technical Information Service. One of our meetings was in a facility shared with the FAA. I wanted to move my rental car from the direct sunlight to a portion of the parking lot which would be shaded. I left the NTIS meeting, moved my vehicle, and entered through a side door. Guess what? I still remember my surprise when I was not asked for my admission key card. The door just opened and I was in an area which housed some FAA computer systems. I opened one of those doors and poked my nose in and saw no one. I shut the door, made sure it was locked, and returned to the NTIS meeting.

I recall thinking, “I hope these folks do software better than they do security.”

Today’s (January 11, 2023) FAA story reminded me that security procedures provide a glimpse of such technical aspects of a government agency as software. I had an engagement for the blue chip consulting firm for which I worked in the 1970s and early 1980s to observe air traffic control procedures and systems at one of the busy US airports. I noticed that incoming aircraft were monitored by printing out tail numbers and details of the flight, using a rubber band to affix these data to wooden blocks which were stacked in a holder on the air traffic control tower’s wall. A controller knew the next flight to handle by taking the bottommost block, using the data, and putting the unused block back in a box on a table near the bowl of antacid tablets.

I recall that discussions were held about upgrading certain US government systems; for example, the IRS and the FAA computer systems. I am not sure if these systems were upgraded. My hunch is that legacy machines are still chugging along in facilities which hopefully are more secure than the door to the building referenced above.

My point is that “good enough” or “close enough for government work” is not a new concept. Many administrations have tried to address legacy systems and their propensity to [a] fail, like the Social Security Agency’s mainframe-to-Web system; [b] not work as advertised, that is, output data that just doesn’t jibe with other records of certain activities (sorry, I am not comfortable naming that agency); or [c] be unstable because funds for training staff, money for qualified contractors, or investments in infrastructure to keep the as-is systems working in an acceptable manner are lacking.

I think someone other than a 78-year-old should be thinking about the issue of technology infrastructure that, unlike Southwest Airlines’ systems or the FAA’s system, does not fail.

Why are these core systems failing? Here’s my list of thoughts. Note: Some of these will make anyone between 45 and 23 unhappy. Here goes:

  1. The people running agencies and their technology units don’t know what to do
  2. The consultants hired to do the work agency personnel should do don’t deliver top quality work. The objective may be a scope change or a new contract, not a healthy system
  3. The programmers don’t know what to do with IBM-type mainframe systems or other legacy hardware. These are not zippy mobile phones which run apps. These are specialized systems whose quirks and characteristics often have to be learned with hands on interaction. YouTube videos or a TikTok instructional video won’t do the job.

Net net: Failures are baked into commercial and government systems. The simultaneous failure of several core systems will generate more than annoyed airline passengers. Time to shift from “good enough” to “do the job right the first time.” See? I told you I would annoy some people with my observations. Well, reality is different from thinking that smart software will write itself.

Stephen E Arnold, January 11, 2023
