Call 9-1-1. AI Will Say Hello Soon

June 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My informal research suggests that every intelware and policeware vendor is working to infuse artificial intelligence, or in my lingo “smart software,” into their products and services. Most of these firms are not Chatty Cathies. Information about innovations dribbles out in talks given at restricted-attendance events. This means that information does not zip around like the posts on the increasingly less used Twitter service #osint.

Government officials talk about smart software which could reduce costs, but the current budget does not allow its licensing. Furthermore, time is required to rethink what to do with the humanoids who will be rendered surplus and ripe for RIF’ing. One of the attendees wisely asks, “Does anyone want dessert?” A wag of the dinobaby’s tail to MidJourney, which generated an original illustration unrelated to any content object upon which the system inadvertently fed. Smart software has to gobble lunch just like government officials.

However, once in a while, some information becomes public, and “real news” outfits recognize the value of the information and make useful factoids available. That’s what happened in “A.I. Call Taker Will Begin Taking Over Police Non-Emergency Phone Lines Next Week: ‘Artificial Intelligence Is Kind of a Scary Word for Us,’ Admits Dispatch Director.”

Let me highlight a couple of statements in the cited article.

First, I circled this statement about Portland, Oregon’s new smart system:

“A automated attendant will answer the phone on nonemergency and based on the answers using artificial intelligence—and that’s kind of a scary word for us at times—will determine if that caller needs to speak to an actual call taker,” BOEC director Bob Cozzie told city commissioners yesterday.

I found this interesting and suggestive of how some government professionals will view the smart software-infused system.

Second, I underlined this passage:

The new AI system was one of several new initiatives that were either announced or proposed at yesterday’s 90-minute city “work session” where commissioners grilled officials and consultants about potential ways to address the crisis.

The “crisis”, as I understand it, boils down to staffing and budgets.

Several observations:

  1. The write up takes a cautious approach to smart software. What will this mean for adoption of even more sophisticated services included in intelware and policeware solutions?
  2. The message I derived from the write up is that governmental entities are not sure what to do. Will this cloud of unknowing have an impact on adoption of AI-infused intelware and policeware systems?
  3. The article did not include information from the vendor. Does this fact provide information about the reporter’s research, or does it suggest the vendor was not cooperative? Intelware and policeware companies are not particularly cooperative, nor are some of the firms set up to respond to outside inquiries. Will those marketing decisions slow down adoption of smart software?

I will let you ponder the implications of this brief, and not particularly detailed article. I would suggest that intelware and policeware vendors put on their marketing hats and plug them into smart software. Some new hurdles for making sales may be on the horizon.

Stephen E Arnold, June 20, 2023

Intellectual Property: What Does That Mean, Samsung?

June 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Former Samsung Executive Accused of Trying to Copy an Entire Chip Plant in China.” I have no idea if [a] the story is straight and true, [b] a disinformation post aimed at China, [c] something a “real news” type just concocted with the help of a hallucinating chunk of smart software, or [d] a story emerging from a lunch meeting where “what if” ideas and “hypotheticals” flitted from Chinese take-out container to take-out container.

It does not matter. I find it bold, audacious, and almost believable.

A single engineer’s pile of schematics, process flow diagrams, and details of third-party hardware required to build a Samsung-like outfit. The illustration comes from the fertile zeros and ones at MidJourney.

The write up reports:

Prosecutors in the Suwon District have indicted a former Samsung executive for allegedly stealing semiconductor plant blueprints and technology from the leading chipmaker, BusinessKorea reports. They didn’t name the 65-year-old defendant, who also previously served as vice president of another Korean chipmaker SK Hynix, but claimed he stole the information between 2018 and 2019. The leak reportedly cost Samsung about $230 million.

Why would someone steal information to duplicate a facility which is probably getting long in the tooth? That’s a good question. Why not steal from the departments of several companies which are planning facilities to be constructed in 2025? The write up states:

The defendant allegedly planned to build a semiconductor in Xi’an, China, less than a mile from an existing Samsung plant. He hired 200 employees from SK Hynix and Samsung to obtain their trade secrets while also teaming up with an unnamed Taiwanese electronics manufacturing company that pledged $6.2 billion to build the new semiconductor plant — the partnership fell through. However, the defendant was able to secure about $358 million from Chinese investors, which he used to create prototypes in a Chengdu, China-based plant. The plant was reportedly also built using stolen Samsung information, according to prosecutors.

Three countries identified. The alleged plant would be located in easy-to-reach Xi’an. (Take a look at the nifty entrance to the walled city. Does that look like a trap to you? It did to me.)

My hunch is that there is more to this story. But it does a great job of casting shade on the Middle Kingdom. Does anyone doubt the risk posed by insiders who get frisky? I want to ask Samsung’s human resources professionals about that vetting process for new hires and what happens when a dinobaby leaves the company with some wrinkles, gray hair, and information. My hunch is that the answer will be, “Not much.”

Stephen E Arnold, June 19, 2023

Trust: Some in the European Union Do Not Believe the Google. Gee, Why?

June 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google’s Ad Tech Dominance Spurs More Antitrust Charges, Report Says.” The write up seems to say that some EU regulators do not trust the Google. Trust is a popular word at the alleged monopoly. Yep, trust is what makes Google’s smart software so darned good.

A lawyer for a high tech outfit in the ad game says, “Commissioner, thank you for the question. You can trust my client. We adhere to the highest standards of ethical behavior. We put our customers first. We are the embodiment of ethical behavior. We use advanced technology to enhance everyone’s experience with our systems.” The rotund lawyer is a confection generated by MidJourney, an example, in this case, of pretty smart software.

The write up says:

These latest charges come after Google spent years battling and frequently bending to the EU on antitrust complaints. Seeming to get bigger and bigger every year, Google has faced billions in antitrust fines since 2017, following EU challenges probing Google’s search monopoly, Android licensing, Shopping integration with search, and bundling of its advertising platform with its custom search engine program.

The article makes an interesting point, almost as an afterthought:

…Google’s ad revenue has continued increasing, even as online advertising competition has become much stiffer…

The article does not ask this question, “Why is Google making more money when scrutiny and restrictions are ramping up?”

From my vantage point in the old age “home” in rural Kentucky, I certainly have zero useful data about this interesting situation, assuming that it is true of course. But, for the nonce, let’s speculate, shall we?

Possibility A: Google is a monopoly and makes money no matter what laws, rules, and policies are articulated. Game is now in extra time. Could the referee be bent?

This idea is simple. Google’s control of ad inventory, ad options, and ad channels is just a good, old-fashioned system monopoly. Maybe TikTok and Facebook offer options, but even with those channels, Google offers options. Who can resist this pitch: “Buy from us, not the Chinese. Or, buy from us, not the metaverse guy.”

Possibility B: Google advertising is addictive and maybe instinctual. Mice never learn and just repeat their behaviors.

Once there is a cheese payoff for the mouse, those mice are learning creatures, and in some wild and non-reproducible experiments they inherit their parents’ prior learning. Wow. Genetics dictate the use of Google advertising by people who are hard wired to be Googley.

Possibility C: Google’s home base does not regulate the company in a meaningful way.

The result is an advanced and hardened technology which is better, faster, and maybe cheaper than other options. How can the EU, with its squabbling “union”, hope to compete with what is weaponized content delivery built on a smart, adaptive global system? The answer is, “It can’t.”

Net net: After a quarter century, what’s more organized for action, a regulatory entity or the Google? I bet you know the answer, don’t you?

Stephen E Arnold, June 13, 2023

Japan and Copyright: Pragmatic and Realistic

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Japan Goes All In: Copyright Doesn’t Apply To AI Training.” In a nutshell, Japan’s alleged stance is accompanied with a message for “creators”: Tough luck.

You are ripping off my content. I don’t think that is fair. I am a creator. The image of a testy office lady is the product of MidJourney’s derivative capabilities.

The write up asserts:

It seems Japan’s stance is clear – if the West uses Japanese culture for AI training, Western literary resources should also be available for Japanese AI. On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.

If this is the direction in which Japan is heading, what’s the posture in China, Viet-Nam, and other countries in the region? How can the US regulate for an unknown future? We know Japan’s approach, it seems.

Stephen E Arnold, June 8, 2023

OpenAI Clarifies What “Regulate” Means to the Sillycon Valley Crowd

May 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Sam AI-man begged (at least he did not get on his hands and knees) the US Congress to regulate artificial intelligence (whatever that means). I just read “Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation.” I know I am old. I know I lose my car keys a couple of times every 24 hours. I do recall Mr. AI-man wanted regulation.

However, the write up reports:

Though unlike in the AI-friendly U.S., Altman has threatened to take his big tech toys to the other end of the sandbox if they’re not willing to play by his rules.

The vibes of the Zuckster zip through my mind. Facebook just chugs along, pays fines, and mostly ignores regulators. China seems to be an exception for Facebook, the Google, and some companies I don’t know about. China had a mobile death van: a person accused and convicted would be executed in the van as soon as it arrived at the location where the convicted bad actor was. Re-education camps and mobile death vans suggest why some US companies choose to exit China. Lawyers are not much good in some of China’s efficient state machines if their client has been processed before they arrive. Fines, however, are okay. Write a check and move on.

Mr. AI-man is making clear that the word “regulate” means one thing to Mr. AI-man and another thing to those who are not getting with the smart software program. The write up states:

Altman said he didn’t want any regulation that restricted users’ access to the tech. He told his London audience he didn’t want anything that could harm smaller companies or the open source AI movement (as a reminder, OpenAI is decidedly more closed off as a company than it’s ever been, citing “competition”). That’s not to mention any new regulation would inherently benefit OpenAI, so when things inevitably go wrong it can point to the law to say they were doing everything they needed to do.

I think “regulate” means what the declining US fast food outfit that told me “have it your way” meant. The burger joint put in a paper bag whatever the professionals behind the counter wanted to deliver. Mr. AI-man doesn’t want any “behind the counter” decision making by a regulatory cafeteria serving up its own version of lunch.

Mr. AI-man wants “regulate” to mean his way.

In the US, it seems, that is exactly what big tech and promising venture funded outfits are going to get; that is, whatever each company wants. Competition is good. See how well OpenAI and Microsoft are competing with Facebook and Google. Regulate appears to mean “let us do what we want to do.”

I am probably wrong. OpenAI, Google, and other leaders in smart software are at this very moment consuming the Harvard Library of books to read in search of information about ethical behavior. The “moral” learning comes later.

Net net: Now I understand the new denotation of “regulate.” Governments work for US high-tech firms. Thus, I think the French term laissez-faire nails it.

Stephen E Arnold, May 25, 2023

AI Legislation: Can the US Regulate What It Does Not Understand Like a Dull Normal Student?

April 20, 2023

I read an essay by publishing and technology luminary Tim O’Reilly. If you don’t know the individual, you may recognize the distinctive art used on many of his books. Here’s what I call the parrot book’s cover:

[image: the parrot book’s cover]

You can get a copy at this link.

The essay to which I referred in the first sentence of this post is “You Can’t Regulate What You Don’t Understand.” The subtitle of the write up is “Or, Why AI Regulations Should Begin with Mandated Disclosures.” The idea is an interesting one.

Here’s a passage I found worth circling:

But if we are to create GAAP for AI, there is a lesson to be learned from the evolution of GAAP itself. The systems of accounting that we take for granted today and use to hold companies accountable were originally developed by medieval merchants for their own use. They were not imposed from without, but were adopted because they allowed merchants to track and manage their own trading ventures. They are universally used by businesses today for the same reason.

The idea is that those without first hand knowledge of something cannot make effective regulations.

The essay makes it clear that government regulators may be better off:

formalizing and requiring detailed disclosure about the measurement and control methods already used by those developing and operating advanced AI systems. [Emphasis in the original.]

The essay states:

Companies creating advanced AI should work together to formulate a comprehensive set of operating metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge.

The conclusion is warranted by the arguments offered in the essay:

We shouldn’t wait to regulate these systems until they have run amok. But nor should regulators overreact to AI alarmism in the press. Regulations should first focus on disclosure of current monitoring and best practices. In that way, companies, regulators, and guardians of the public interest can learn together how these systems work, how best they can be managed, and what the systemic risks really might be.

My thought is that it may be useful to look at what generalities and self-regulation deliver in real life. As examples, I would point out:

  1. The report “Independent Oversight of the Auditing Professionals: Lessons from US History.” To keep it short and sweet: Self regulation has failed. I will leave you to work through the somewhat academic argument. I have burrowed through the document and largely agree with the conclusion.
  2. The US Securities & Exchange Commission’s decision to accept $1.1 billion in penalties as a result of 16 Wall Street firms’ failure to comply with record keeping requirements.
  3. The hollowness of the points set forth in “The Role of Self-Regulation in the Cryptocurrency Industry: Where Do We Go from Here?” in the wake of the Sam Bankman Fried FTX problem.
  4. The MBA-infused “ethical compass” of outfits operating with a McKinsey-type of pivot point.

My view is that the potential payoff from pushing forward with smart software is sufficient incentive to create a Wild West, anything-goes environment. Those companies with the most to gain and the resources to win at any cost can overwhelm US government professionals with flights of legal eagles.

With innovations in smart software arriving quickly, possibly as quickly as new Web pages in the early days of the Internet, firms that don’t move quickly, act expediently, and push toward autonomous artificial intelligence will be unable to catch up with firms who move with alacrity.

Net net: No regulation, imposed or self-generated, will alter the rocket launch of new services. The US economy is not set up to encourage snail-speed innovation. The objective is met by generating money. Money, not guard rails, common sense, or actions which harm a company’s self-interest, makes the system work… for some. Losers are the exhaust from an economic machine. One doesn’t drive a Model T Ford today; those who can drive a Tesla Plaid or a McLaren. The “pet” is a French bulldog, not a parrot.

Stephen E Arnold, April 20, 2023

The Confluence: Big Tech, Lobbyists, and the US Government

March 13, 2023

I read “Biden Admin’s Cloud Security Problem: It Could Take Down the Internet Like a Stack of Dominos.” I was thinking that the take down might be more like the collapses of outfits like Silicon Valley Bank.

I noted this statement about the US government, which is

embarking on the nation’s first comprehensive plan to regulate the security practices of cloud providers like Amazon, Microsoft, Google and Oracle, whose servers provide data storage and computing power for customers ranging from mom-and-pop businesses to the Pentagon and CIA.

Several observations:

  1. Lobbyists have worked to make it easy for cloud providers and big technology companies to generate revenue in an unregulated environment.
  2. Government officials have responded with inaction and spins through the revolving door. A regulator or elected official today becomes tomorrow’s technology decision maker and then back again.
  3. The companies themselves have figured out how to use their money and armies of attorneys to do what is best for the companies paying them.

What’s the consequence? Wonderful wordsmithing is one consequence. The problem is that now there are Mauna Loas burbling in different places.

Three of them are evident. The first is the fragility of the Silicon Valley approach to innovation, which is reactive and imitative at this time. The second is the complexity of the three-body problem resulting from lobbyists, government methods, and monopolistic behaviors. The third is that commercial enterprises have become familiar with the practice of putting their thumbs on the scale. Who will notice?

What will happen? The possible answers are not comforting. Waving a magic wand and changing what are now institutional behaviors established over decades of handcrafting will be difficult.

I touch on a few of the consequences in an upcoming lecture for the attendees at the 2023 National Cyber Crime Conference.

Stephen E Arnold, March 13, 2023

Adulting Desperation at TikTok? More of a PR Play for Sure

March 1, 2023

TikTok is allegedly harvesting data from its users and allegedly making that data accessible to government-associated research teams in China. The story “TikTok to Set One-Hour Daily Screen Time Limit by Default for Users under 18” makes clear that TikTok is in concession mode. The write up says:

TikTok announced Wednesday that every user under 18 will soon have their accounts default to a one-hour daily screen time limit, in one of the most aggressive moves yet by a social media company to prevent teens from endlessly scrolling….

Now here’s the part I liked:

Teenage TikTok users will be able to turn off this new default setting… [emphasis added]

The TikTok PR play misses the point. Despite the yip yap about Oracle as an intermediary, the core issue is suspicion that TikTok is sucking down data. Some of the information can be cross correlated with psychological profiles. How useful would it be to know that a TikTok behavior suggests a person who may be susceptible to outside pressure, threats, or bribes? No big deal? Well, it is a big deal because some young people enlist in the US military and others take jobs at government entities. How about those youthful contractors swarming around Executive Branch agencies’ computer systems, Congressional offices, and some interesting facilities involved with maps and geospatial work?

I have talked about TikTok risks for years. Now we get a limit on usage?

Hey, that’s progress like making a square wheel out of stone.

Stephen E Arnold, March 1, 2023

Is the UK Stupid? Well, Maybe, But Government Officials Have Identified Some Targets

February 27, 2023

I live in good, old Kentucky, rural Kentucky, according to my deceased father-in-law. I am not an Anglophile. The country kicked my ancestors out in 1575 for not going with the flow. Nevertheless, I am reluctant to slap “even more stupid” on ideas generated by those who draft regulations. A number of experts get involved. Data are collected. Opinions are gathered from government sources and others. The result is a proposal to address a problem.

The write up “UK Proposes Even More Stupid Ideas for Directly Regulating the Internet, Service Providers” makes clear that the government has not been particularly successful with its most recent ideas for updating the UK’s 1990 Computer Misuse Act. The reasons offered are good; for example, reducing cyber crime and conducting investigations. The downside of the ideas is that governments make mistakes. Governmental powers creep outward over time; that is, government becomes more invasive.

The article highlights the suggested changes that the people drafting the modifications suggest:

  1. Seize domains and Internet Protocol addresses
  2. Use of contractors for this process
  3. Restrict algorithm-manufactured domain names
  4. Ability to go after the registrar and the entity registering the domain name
  5. Making these capabilities available to other government entities
  6. A court review
  7. Mandatory data retention
  8. Redefining copying data as theft
  9. Expanded investigatory activities.
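
To see why “algorithm-manufactured domain names” are on the drafters’ list, here is a minimal sketch of a domain generation algorithm (DGA) of the kind malware families use to rendezvous with command-and-control servers; the seed string and parameters are invented for illustration, not taken from any real botnet:

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive a deterministic list of pseudo-random domain names from a
    seed and a date, in the style of a typical malware DGA. Both the
    malware and its operator can recompute today's list independently."""
    domains = []
    for i in range(count):
        # Hash the seed, date, and counter to get unpredictable bytes.
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map the first 12 hex digits to lowercase letters for the label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + ".com")
    return domains

print(generate_domains("examplebotnet", date(2023, 2, 27)))
```

Because thousands of such throwaway names can be registered (or pre-registered) per day, seizing individual domains is whack-a-mole, which is presumably why the proposal pairs seizure powers with restrictions on the names themselves and pressure on registrars.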

I am not a lawyer, but these proposals are troubling.

I want to point out that whoever drafted the proposal is like a tracking dog with an okay nose. Based on our research for an upcoming lecture to some US government officials, it is clear that domain name registries warrant additional scrutiny. We have identified certain ISPs as active enablers of bad actors because there is no effective oversight on these commercial and sometimes non-governmental organizations or non-profit “do good” entities. We have identified transnational telecommunications and service providers who turn a blind eye to the actions of other enterprises in the “chain” which enables Internet access.

The UK proposal seems interesting and a launch point for discussion; the tracking dog has focused attention on one of the “shadow” activities enabled by lax regulators. Hopefully more scrutiny will be directed at the complicated, essentially Wild West environment populated by enablers of criminal activity like human trafficking, weapons sales, contraband and controlled substance marketplaces, domain name fraud, malware distribution, and similar activities.

At least a tracking dog is heading along what might be an interesting path to explore.

Stephen E Arnold, February 27, 2023

Googzilla Squeezed: Will the Beastie Wriggle Free? Can Parents Help Google Wiggle Out?

January 25, 2023

How easy was it for our prehistoric predecessors to capture a maturing reptile? I am thinking of Googzilla. (That’s my way of conceptualizing the Alphabet Google DeepMind outfit.)

This illustration of capturing the dangerous dinosaur shows one regulator and one ChatGPT dev in the style of Norman Rockwell (who may be spinning in his grave). The art was output by the smart software in use at Craiyon.com. I love those wonky spellings and the weird video ads and the image-obscuring Next and Stay buttons. Is this the type of software the Google fears? I believe so.

On one side of the creature is the pesky ChatGPT PR tsunami. Google’s management team had to call Google’s parents to come to the garage. The whiz kids find themselves in a marketing battle. Imagine, a technology that Facebook dismisses as not a big deal needs help. So the parents come back home from their vacations and social life to help out Sundar and Prabhakar. I wonder if the parents are asking, “What now?” and “Do you think these whiz kids want us to move in with them?” Forbes, the capitalist tool with annoying pop ups, tells one side of the story in “How ChatGPT Suddenly Became Google’s Code Red, Prompting Return of Page and Brin.”

On the other side of Googzilla is a weak looking government regulator. The Wall Street Journal (January 25, 2023) published “US Sues to Split Google’s Ad Empire.” (Paywall alert!) The main idea is that after a couple of decades of Google is free, great, and gives away nice tchotchkes US Federal and state officials want the Google to morph into a tame lizard.

Several observations:

  1. I find it amusing that Google had to call its parents for help. There’s nothing like a really tough, decisive set of whiz kids
  2. The Google has some inner strengths, including lawyers, lobbyists, and friends who really like Google mouse pads, LED pins, and T shirts
  3. Users of ChatGPT may find that as poor as Google’s search results are, the burden of figuring out an “answer” falls on the user. If the user cooks up an incorrect answer, the Google is just presenting links or it used to. When the user accepts a ChatGPT output as ready to use, some unforeseen consequences may ensue; for example, getting called out for presenting incorrect or stupid information, getting sued for copyright violations, or assuming everyone is using ChatGPT so go with the flow

Net net: Capturing and getting the vet to neuter the beastie may be difficult. Even more interesting is the impact of ChatGPT on allegedly calm, mature, and seasoned managers. Yep, Code Red. “Hey, sorry to bother you. But we need your help. Right now.”

Stephen E Arnold, January 25, 2023
