Cyber Security Software and AI: Man and Machine Hook Up

January 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My hunch is that 2024 is going to be quite interesting with regard to cyber security. The race among policeware vendors to add “artificial intelligence” to their systems began shortly after Microsoft’s ChatGPT moment. Smart agents, predictive analytics coupled to text sources, and real-time alerts from smart image monitoring systems are three application spaces getting AI boosts. The efforts are commendable if over-hyped. One high-profile firm’s online webinar presented jargon and buzzwords but zero evidence of the conviction or closure value of the smart enhancements.

image

The smart cyber security software system outputs alerts which the system manager cannot escape. Thanks, MSFT Copilot Bing thing. You produced a workable illustration without slapping my request across my face. Good enough too.

Let’s accept as a working premise that everyone from my French bulldog to my neighbor’s ex-wife wants smart software to bring back the good old, pre-Covid, go-go days. Also, I stipulate that one should ignore the fact that smart software is a demonstration of how numerical recipes can output “good enough” data. Hallucinations, errors, and close-enough-for-horseshoes are part of the method. What’s the likelihood the door of a commercial aircraft would be removed from an aircraft in flight? Answer: Well, most flights don’t lose their doors. Stop worrying. Those are the rules for this essay.

Let’s look at “The I in LLM Stands for Intelligence.” I grant the title may not be the best one I have spotted this month, but here’s the main point of the article in my opinion. Writing about automated threat and security alerts, the essay opines:

When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means. The better the crap, the longer time and the more energy we have to spend on the report until we close it. A crap report does not help the project at all. It instead takes away developer time and energy from something productive. Partly because security work is considered one of the most important areas so it tends to trump almost everything else.

The idea is that strapping on some smart software can increase the outputs from a security alerting system. Instead of helping the overworked and often reviled cyber security professional, the smart software makes it more difficult to figure out what a bad actor has done. The essay includes this blunt section heading: “Detecting AI Crap.” Enough said.
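The triage arithmetic behind this complaint can be sketched with a toy model. Everything below is invented for illustration (the time constants, the alert mix are my assumptions, not data from the cited essay); it is only a way to see why polish without validity multiplies the human cost:

```python
# Illustrative only: a toy model of the triage burden described above.
# The quality scores, times, and alert mix are invented numbers, not data
# from the cited essay.

def triage_minutes(report_polish: float, is_genuine: bool) -> float:
    """Estimate minutes a human spends before closing a report.

    The essay's point: the more polished a bogus report looks, the
    longer it takes to recognize and discard it.
    """
    base = 5.0                      # minimum look at any report
    if is_genuine:
        return base + 30.0          # real issues take real work regardless
    return base + 60.0 * report_polish  # polished crap wastes the most time

# A queue of (polish, genuine) pairs: AI tools raise polish, not validity.
hand_written_noise = [(0.2, False)] * 10
ai_polished_noise = [(0.9, False)] * 10
real_issues = [(0.5, True)] * 2

def total_hours(queue) -> float:
    """Total reviewer time, in hours, to clear a queue of alerts."""
    return sum(triage_minutes(p, g) for p, g in queue) / 60.0

print(f"10 sloppy fakes + 2 real: {total_hours(hand_written_noise + real_issues):.1f} h")
print(f"10 polished fakes + 2 real: {total_hours(ai_polished_noise + real_issues):.1f} h")
```

Under these made-up numbers, the same ten bogus reports cost nearly three times as much reviewer time once they are dressed up nicely, which is the “detecting AI crap” problem in miniature.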

The idea is that more human expertise is needed. The smart software becomes a problem, not a solution.

I want to shift attention to the managers or the employee who caused a cyber security breach. In what is another zinger of a title, let’s look at this research report, “The Immediate Victims of the Con Would Rather Act As If the Con Never Happened. Instead, They’re Mad at the Outsiders Who Showed Them That They Were Being Fooled.” Okay, this is the ostrich method. Deny stuff by burying one’s head in digital sand like TikToks.

The write up explains:

The immediate victims of the con would rather act as if the con never happened. Instead, they’re mad at the outsiders who showed them that they were being fooled.

Let’s assume the data in this “Victims” write up are accurate, verifiable, and unbiased. (Yeah, I know that is a stretch.)

What do these two articles do to influence my view that cyber security will be an interesting topic in 2024? My answers are:

  1. Smart software will allegedly detect, alert, and warn of “issues.” The flow of “issues” may overwhelm or numb staff who must decide what’s real and what’s a fakeroo. Burdened staff can make errors, thus increasing security vulnerabilities or overlooking ones that are significant.
  2. Managers, like the staffer who lost a mobile phone with company passwords in a plain text note file or an email called “passwords,” will blame whoever blows the whistle. The result is a willful refusal to talk about what happened, why, and the consequences. Examples range from big libraries in the UK to can-kicking hospitals in a flyover state like Kentucky.
  3. Marketers of remediation tools will have a banner year. Marketing collateral becomes a closed deal, keeping the art history majors who write the copy secure in their jobs at cyber security companies.

Will bad actors pay attention to smart software and the behavior of senior managers who want to protect share price or their own job? Yep. Close attention.

Stephen E Arnold, January 8, 2024


Is Philosophy Irrelevant to Smart Software? Think Before Answering, Please

January 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I listened to Lex Fridman’s interview with the founder of Extropic. The outfit is into smart software and “inventing” a fresh approach to the plumbing required to make AI more like humanoids.

image

As I listened to the questions and answers, three factoids stuck in my mind:

  1. Extropic’s desire to just go really fast is a conscious decision shared among those involved with the company; that is, we know who wants to go fast because they work there or work at the firm. (I am not going to argue about the upside and downside of “going fast.” That will be another essay.)
  2. The downstream implications of the Extropic vision are secondary to the benefits of finding ways to avoid concentration of AI power. I think the idea is that absolute power produces outfits like the Google-type firms which are bedeviling competitors, users, and government authorities. Going fast is not a thrill for processes that require going slow.
  3. The decisions Extropic’s founder has made are bound up in a world view, personal behaviors for productivity, interesting foods, and learnings accreted over a stellar academic and business career. In short, Extropic embodies a philosophy.

Philosophy, therefore, influences decisions. So we come to my topic in this essay. I noted two different write ups about how informed people make decisions. I am not going to refer to philosophers popular in introductory college philosophy classes. I am going to ignore the uneven treatment of philosophers in Will Durant’s The Story of Philosophy. Nah. I am going with state of the art modern analysis.

The first online article I read is a survey (knowledge product) from the estimable IBM / Watson outfit or a contractor. The relatively current document is “CEO Decision Making in the Age of AI.” The main point of the document in my opinion is summed up in this statement from a computer hardware and services company:

Any decision that makes its way to the CEO is one that involves high degrees of uncertainty, nuance, or outsize impact. If it was simple, someone else— or something else—would do it. As the world grows more complex, so does the nature of the decisions landing on a CEO’s desk.

But how can a CEO decide? The answer is, “Rely on IBM.” I am not going to recount the evolution (perhaps devolution) of IBM, nor the uncomfortable stories about shedding old employees (the term dinobaby originated at IBM, according to one former I’ve Been Moved veteran). I will not explain IBM’s decisions about chip fabrication, its interesting hiring policies favoring individuals who might have retained some fondness for the land of their fathers and mothers, nor the fancy dancing required to keep mainframes a big money pump. Nope.

The point is that IBM is positioning itself as a thought leader, a philosopher of smart software, technology, and management. I find this interesting because IBM, like some Google-type companies, is a case example of management shortcomings. These same shortcomings are swathed in weird jargon and buzzwords which are bent to one end: generating revenue.

Let me highlight one comment from the 27 page document and urge you to read it when you have a few moments free. Here’s the one passage I will use as a touchstone for “decision making”:

The majority of CEOs believe the most advanced generative AI wins.

Oh, really? Is smart software sufficiently mature? That’s news to me. My instinct is that it is new information to many CEOs as well.

The second essay about decision making is from an outfit named Ness Labs. That essay is “The Science of Decision-Making: Why Smart People Do Dumb Things.” The structure of this essay is more along the lines of a consulting firm’s white paper. The approach contrasts with IBM’s free-floating global survey document.

The obvious implication is that if smart people are making dumb decisions, smart software can solve the problem. Extropic would probably agree and, were the IBM survey data accurate, “most CEOs” buy into a ride on the AI bandwagon.

The Ness Labs’ document includes this statement which in my view captures the intent of the essay. (I suggest you read the essay and judge for yourself.)

So, to make decisions, you need to be able to leverage information to adjust your actions. But there’s another important source of data your brain uses in decision-making: your emotions.

Ah, ha, logic collides with emotions. But to fix the “problem,” Ness Labs provides a diagram created in 2008 (a bit before the January 2023 Microsoft OpenAI marketing fireworks):

image

Note that “decide” is a mnemonic device intended to help me remember each of the items. I learned this technique in the fourth grade when I had to memorize the names of the Great Lakes. No one has ever asked me to name the Great Lakes by the way.

Okay, what we have learned is that IBM has survey data backing up the idea that smart software is the future. Those data, if on the money, validate the go-go approach of Extropic. Plus, Ness Labs provides a “decider model” which can be used to create better decisions.

I concluded that philosophy is less important than fostering a general message that says, “Smart software will fix up dumb decisions.” I may be oversimplifying, but the implicit assumptions about the importance of artificial intelligence, the reliability of the software, and the allegedly universal desire by big-time corporate management are not worth worrying about.

Why is the cartoon philosopher worrying? I think most of this stuff is a poorly made road on which those jockeying for power and money want to drive their most recent knowledge vehicles. My tip? Look before crossing that information superhighway. Speeding myths can be harmful.

Stephen E Arnold, January 8, 2024

Pegasus Equipped with Wings Stomps Around and Leaves Hoof Prints

January 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The NSO Group’s infamous Pegasus spyware is in the news again, this time in India. Newsclick reveals, “New Forensic Report Finds ‘Damning Revelations’ of ‘Repeated’ Pegasus Use to Target Indian Scribes.” The report is a joint project by Amnesty International and The Washington Post. It was spurred by two indicators. First, a routine monitoring exercise in June 2023 turned up traces of Pegasus on certain iPhones. Then, in October, several journalists and Opposition party politicians received Apple alerts warning of “State-sponsored attackers.” The article tells us:

“‘As a result, Amnesty International’s Security Lab undertook a forensic analysis on the phones of individuals around the world who received these notifications, including Siddharth Varadarajan and Anand Mangnale. It found traces of Pegasus spyware activity on devices owned by both Indian journalists. The Security Lab recovered evidence from Anand Mangnale’s device of a zero-click exploit which was sent to his phone over iMessage on 23 August 2023, and designed to covertly install the Pegasus spyware. … According to the report, the ‘attempted targeting of Anand Mangnale’s phone happened at a time when he was working on a story about an alleged stock manipulation by a large multinational conglomerate  in India.’”

This was not a first for The Wire co-founder Siddharth Varadarajan. His phone was also infected with Pegasus back in 2018, according to forensic analysis ordered by the Supreme Court of India. The latest findings have Amnesty International urging bans on invasive, opaque spyware worldwide. Naturally, The NSO Group continues to insist all its clients are “vetted law enforcement and intelligence agencies that license our technologies for the sole purpose of fighting terror and major crime” and that it has policies in place to prevent “targeting journalists, lawyers and human rights defenders or political dissidents that are not involved in terror or serious crimes.” Sure.

Meanwhile, some leaders of India’s ruling party blame Apple for those security alerts, alleging the “company’s internal threat algorithms were faulty.” Interesting deflection. We’re told an Apple security rep was called in and directed to craft some other, less alarming explanation for the warnings. Is this because the government itself is behind the spyware? Unclear; Parliament refuses to look into the matter, claiming it is sub judice. How convenient.

Cynthia Murrell, January 8, 2024

AI Ethics: Is That What Might Be Called an Oxymoron?

January 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

MSN.com presented me with this story: “OpenAI and Microsoft on Trial — Is the Clash with the NYT a Turning Point for AI Ethics?” I can answer this question, but that would spoil the entertainment value of my juxtaposition of this write up with the quasi-scholarly list of business start up resources. Why spoil the fun?

image

Socrates is lecturing at a Fancy Dan business school. The future MBAs are busy scrolling TikTok, pitching ideas to venture firms, and scrolling JustBang.com. Viewing this sketch, it appears that ethics and deep thought are not as captivating as mobile devices and having fun. Thanks, MSFT Copilot. Two tries and a good enough image.

The article asks a question which I find wildly amusing. The “on trial” write up states in 21st century rhetoric:

The lawsuit prompts critical questions about the ownership of AI-generated content, especially when it comes to potential inaccuracies or misleading information. The responsibility for losses or injuries resulting from AI-generated content becomes a gray area that demands clarification. Also, the commercial use of sourced materials for AI training raises concerns about the value of copyright, especially if an AI were to produce content with significant commercial impact, such as an NYT bestseller.

For more than two decades online outfits have been sucking up information which is usually slapped with the bright red label “open source information.”

The “on trial” essay says:

The future of AI and its coexistence with traditional media hinges on the resolution of this legal battle.

But what about ethics? The “on trial” write up dodges the ethics issue. I turned to a go-to resource about ethics. No, I did not look at the papers of the Harvard ethics professor who allegedly made up data for ethics research. Ho ho ho. Nope. I went to the Enchanting Trader and its 4000+ Essential Business Startup Database of information.

I displayed the full list of resources and ran a search for the word “ethics.” There was one hit to “Will Joe Rogan Ever IPO?” Amazing.

What I concluded is that “ethics” is not number one with a bullet among the resources of the 4000+ essential business start up items. It strikes me that a single trial about smart software is unlikely to resolve “ethics” for AI. If it does, will the resolution have the legs that Socrates’ musings have had? More than likely, most people will ask, “Who is Socrates?” or “What the heck are ethics?”

Stephen E Arnold, January 5, 2024

IBM: AI Marketing Like It Was 2004

January 5, 2024

This essay is the work of a dumb dinobaby. No smart software required. Note: The word “dinobaby” is — I have heard — a coinage of IBM. The meaning is an old employee who is no longer wanted due to salary, health care costs, and grousing about how the “new” IBM is not the “old” IBM. I am a proud user of the term, and I want to switch my tail to the person who whipped up the word.

What’s the future of AI? The answer depends on whom one asks. IBM, however, wants to give it the old college try and answer the question so people forget about the Era of Watson. There’s a new Watson in town, or at least, there is a new Watson at the old IBM url. IBM has an interesting cluster of information on its Web site. The heading is “Forward Thinking: Experts Reveal What’s Next for AI.”

IBM crows that it “spoke with 30 artificial intelligence visionaries to learn what it will take to push the technology to the next level.” Five of these interviews are now available on the IBM Web site. My hunch is that IBM will post new interviews, hit the new release button, post some links on social media, and then hit the “Reply” button.

image

Can IBM ignite excitement and capture the revenues it wants from artificial intelligence? That’s a good question, and I want to ask the expert in the cartoon for an answer. Unfortunately only customers and their decisions matter for AI thought leaders unless the intended audience is start ups, professors, and employees. Thanks, MSFT Copilot Bing thing. Good enough.

As I read the interviews, I thought about the challenge of predicting where smart software would go as it moved toward its “what’s next.” Here’s a mini-glimpse of what the IBM visionaries have to offer. Note that I asked Microsoft’s smart software to create an image capturing the expert sitting in an office surrounded by memorabilia.

Kevin Kelly (the author of What Technology Wants) says: “Throughout the business world, every company these days is basically in the data business and they’re going to need AI to civilize and digest big data and make sense out of it—big data without AI is a big headache.” My thought is that IBM is going to make clear that it can help companies with deep pockets tackle big data with AI. Does AI want something, or do those trying to generate revenue want something?

Mark Sagar (creator of BabyX) says: “We have had an exponential rise in the amount of video posted online through social media, etc. The increased use of video analysis in conjunction with contextual analysis will end up being an extremely important learning resource for recognizing all kinds of aspects of behavior and situations. This will have wide ranging social impact from security to training to more general knowledge for machines.” Maybe IBM will TikTok itself?

Chieko Asakawa (an unsighted IBM professional) says: “We use machine learning to teach the system to leverage sensors in smartphones as well as Bluetooth radio waves from beacons to determine your location. To provide detailed information that the visually impaired need to explore the real world, beacons have to be placed between every 5 to 10 meters. These can be built into building structures pretty easily today.” I wonder if the technology has surveillance utility?
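For readers curious about the mechanics Ms. Asakawa describes, here is a minimal sketch of beacon-based positioning. The beacon layout, transmit power, and path-loss numbers below are my assumptions for illustration, not details of IBM’s actual system:

```python
# Illustrative only: hypothetical beacon map with (x, y) positions in meters,
# spaced roughly 5-10 m apart as the interview suggests. All values invented.
BEACONS = {"b1": (0.0, 0.0), "b2": (8.0, 0.0), "b3": (0.0, 7.0), "b4": (8.0, 7.0)}

def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                     path_loss_exponent: float = 2.0) -> float:
    """Invert the log-distance path-loss model: rssi = tx - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def estimate_position(readings: dict) -> tuple:
    """Weighted centroid of beacon positions, weighted by 1/distance.

    A crude stand-in for the sensor fusion a real system would do.
    """
    weights = {b: 1.0 / max(rssi_to_distance(r), 0.1) for b, r in readings.items()}
    total = sum(weights.values())
    x = sum(BEACONS[b][0] * w for b, w in weights.items()) / total
    y = sum(BEACONS[b][1] * w for b, w in weights.items()) / total
    return (x, y)

# A phone near beacon b1 hears it loudly and the others faintly,
# so the estimate lands close to b1's corner of the room.
pos = estimate_position({"b1": -55.0, "b2": -75.0, "b3": -74.0, "b4": -80.0})
print(pos)
```

A production system would fuse this crude estimate with the phone’s accelerometer and compass data; the point is only that densely placed beacons make even a weighted-centroid guess workable.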

Yoshua Bengio (seller of an AI company to ServiceNow) says: “AI will allow for much more personalized medicine and bring a revolution in the use of large medical datasets.” IBM appears to have forgotten about its Houston medical adventure, and Mr. Bengio found it not worth mentioning, I assume.

Margaret Boden (a former Harvard professor without much of a connection to Harvard’s made up data and administrative turmoil) says: “Right now, many of us come at AI from within our own silos and that’s holding us back.” Aren’t silos necessary for security, protecting intellectual property, and getting tenure? Probably the “silobreaking” will become a reality.

Several observations:

  1. IBM is clearly trying hard to market itself as a thought leader in artificial intelligence. The Jeopardy play did not warrant a replay.
  2. IBM is spending money to position itself as a Big Dog pulling the AI sleigh. The MIT tie up and this AI Web extravaganza are evidence that IBM is [a] afraid of flubbing again, [b] going to market its way to importance, [c] trying to get traction as outfits like OpenAI, Mistral, and others capture attention in the US and Europe.
  3. Whether IBM can generate awareness of its thought leadership in AI underscores one of the challenges the firm faces in 2024.

Net net: The company that coined the term “dinobaby” has its work cut out for itself in my opinion. Is Jeopardy looking like a channel again?

Stephen E Arnold, January 5, 2024

Meta Never Met a Kid Data Set It Did Not Find Useful

January 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Adults are ripe targets for data exploitation in modern capitalism. While adults fight for their online privacy, most have rolled over and accepted the inevitable consumer Big Brother. When big tech companies go after monetizing kids, however, that’s when adults fight back like rabid bears. Engadget writes about how Meta is fighting against the federal government about kids’ data: “Meta Sues FTC To Block New Restrictions On Monetizing Kids’ Data.”

Meta is taking the FTC to court to prevent it from reopening a landmark $5 billion privacy case from 2020 and to allow the company to keep monetizing kids’ data on its apps. Meta is suing because a federal judge ruled that the FTC can impose new, more stringent rules about how Meta is allowed to conduct business.

Meta claims the FTC is out for a power grab and is acting unconstitutionally, while the FTC says Meta consistently violates the 2020 settlement and the Children’s Online Privacy Protection Act. The FTC wants its new rules to limit Meta’s facial recognition usage and to impose a moratorium on new products and services until a third party audits them for privacy compliance.

Meta is not a huge fan of the US Federal Trade Commission:

“The FTC has been a consistent thorn in Meta’s side, as the agency tried to stop the company’s acquisition of VR software developer Within on the grounds that the deal would deter "future innovation and competitive rivalry." The agency dropped this bid after a series of legal setbacks. It also opened up an investigation into the company’s VR arm, accusing Meta of anti-competitive behavior."

The FTC is doing what government agencies are supposed to do: protect citizens from greedy and harmful practices like those of big business. The FTC can enforce laws and force big businesses to pay fines, put leaders in jail, or even shut them down. But regulators have spent decades ramping up to take meaningful action. The result? The thrashing over kiddie data.

Whitney Grace, January 5, 2024

YouTube: Personal Views, Policies, Historical Information, and Information Shaping about Statues

January 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have never been one to tour ancient sites. Machu Picchu? Meh. The weird Roman temple in Nîmes? When’s lunch? The bourbon trail? You must be kidding me! I have a vivid memory of visiting the US Department of Justice building for a meeting, walking through the Hall of Justice, and seeing Lady Justice covered up. I heard that the drapery cost US$8,000. I did not laugh, nor did I make any comments about cover ups at that DoJ meeting or subsequent meetings. What a hoot! Other officials have covered up statues and possibly other disturbing things.

I recall the Deputy Administrator who escorted me and my colleague to a meeting remarking, “Yeah, Mr. Ashcroft has some deeply held beliefs.” Yep, personal beliefs, propriety, not offending those entering a US government facility, and a desire to preserve certain cherished values. I got it. And I still get it. Hey, who wants to lose a government project because some sculpture artist type did not put clothes on a stone statue?

image

Are large technology firms in a position to control, shape, propagandize, and weaponize information? If the answer is, “Sure”, then users are little more than puppets, right? Thanks, MSFT Copilot Bing thing. Good enough.

However, there are some people who do visit historical locations. Many of these individuals scrutinize the stone work, the carvings, and the difficulty of moving a 100 ton block from Point A (a quarry 50 miles away) to Point B (a lintel in the middle of nowhere). I am also ignorant of art because I skipped Art History in college. I am clueless about ancient history. (I took another useless subject like a math class.) And many of these individuals have deep-rooted beliefs about the “right way” to present information in the form of stone carvings.

Now let’s consider a YouTuber who shoots videos in temples in southeast Asia. The individual works hard to find examples of deep meanings in the carvings beyond what the established sacred texts present. His hobby horse, as I understand the YouTuber, is that ancient aliens, fantastical machines, and amazing constructions are what many carvings are “about.” Obviously if one embraces what might be received wisdom about ancient texts from Indian and adjacent / close countries, the presentation of statues with disturbing images and even more troubling commentary is a problem. I think this is the same type of problem that a naked statue in the US Department of Justice posed.

The YouTuber allegedly is Praveen Mohan, and his most recent video is “YouTube Will Delete Praveen Mohan Channel on January 31.” Mr. Mohan’s angle is to shoot a video of an ancient carving in a temple and suggest that the stone work conveys meanings orthogonal to the generally accepted story about giant temple carvings. From my point of view, I have zero clue if Mr. Mohan is on the money with his analyses or if he is like someone who thinks that Peruvian stone masons melted granite for Cusco’s walls. The point of the video is that taking pictures of historical sites and their carvings violates YouTube’s assorted rules, regulations, codes, mandates, and guidelines.

Mr. Mohan expresses the opinion that he will be banned, blocked, downchecked, punished, or made into a poster child for stone pornography or some similar punishment. He shows images which have been demonetized. He shows his “dashboard” with visual proof that he is in hot water with the Alphabet Google YouTube outfit. He shows proof that his videos are violating copyright. Okay. Maybe a reincarnated stone mason from ancient times has hired a lawyer, contacted Google from a quantum world, and frightened the YouTube wizards? I don’t know.

Several questions arose when my team and I discussed this interesting video addressing YouTube’s actions toward Mr. Mohan. Let me share several with you:

  1. Is the alleged intentional action against Mr. Mohan motivated by Alphabet Google YouTube managers with roots in southeast Asia? Maybe a country like India? Maybe?
  2. Is YouTube going after Mr. Mohan because making videos about religious sites, icons, and architecture is indeed a violation of copyright? I thought India was reasonably aggressive in enforcing its laws. Has Alphabet Google YouTube decided to help out India and other southeast Asian countries protect the work of their ancient artisans?
  3. Has Mr. Mohan created a legal problem for YouTube and the company is taking action to shore up its legal arguments should the naked statue matter end up in court?
  4. Is Mr. Mohan’s assertion about directed punishment accurate?

Obviously there are many issues in play. Should one try to obtain more clarification from Alphabet Google YouTube? That’s a great idea. Mr. Mohan may pursue it. However, will Google’s YouTube or the Alphabet senior management provide clarification about policies?

I will not hold my breath. But those statues covered up in the US Department of Justice reflected one person’s perception of what was acceptable. That’s something I won’t forget.

Stephen E Arnold, January 4, 2024

Does Amazon Do Questionable Stuff? Sponsored Listings? Hmmm.

January 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Amazon, eBay, and other selling platforms allow vendors to buy sponsored ads or listings. Sponsored ads or listings promote products and services to the top of search results. It’s similar to how Google sells ads. Unfortunately, Google’s search results are polluted with more sponsored ads than organic results. Sponsored ads might not be a wise investment. Pluralistic explains that sponsored ads are really a huge waste of money: “Sponsored Listings Are a Ripoff for Sellers.”

Amazon relies on a payola-style sponsored ad system, where sellers bid to be top-ranked in listings even if their products don’t apply to a search query. Payola systems are illegal, but Amazon makes $31 billion annually from its system. The problem is that the $31 billion is taken from Amazon sellers, who pay it in fees for the privilege of selling on the platform. Sellers then recoup that money from consumers, and prices are raised across all the markets. Amazon controls pricing on the Internet.

Another huge part of a seller’s budget goes to Amazon advertising. If sellers don’t buy ads in searches that correspond to their products, they’re kicked off the first page. The Amazon payola system benefits only the company and the sellers who pay into it. Three business-school researchers, Vibhanshu Abhishek, Jiaqi Shi, and Mingyu Joo, studied the harmful effects of payolas:

“After doing a lot of impressive quantitative work, the authors conclude that for good sellers, showing up as a sponsored listing makes buyers trust their products less than if they floated to the top of the results "organically." This means that buying an ad makes your product less attractive than not buying an ad. The exception is sellers who have bad products – products that wouldn’t rise to the top of the results on their own merits. The study finds that if you buy your mediocre product’s way to the top of the results, buyers trust it more than they would if they found it buried deep on page eleventy-million, to which its poor reviews, quality or price would normally banish it. But of course, if you’re one of those good sellers, you can’t simply opt not to buy an ad, even though seeing it with the little "AD" marker in the thumbnail makes your product less attractive to shoppers. If you don’t pay the danegeld, your product will be pushed down by the inferior products whose sellers are only too happy to pay ransom.”
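The ranking dynamic the study describes can be sketched in a few lines. The products, quality scores, and bids below are invented for illustration; the sketch only demonstrates the claim that paid placement trumps organic merit:

```python
# Illustrative only: a toy ranker showing the dynamic the quote describes.
# Product names, quality scores, and bids are invented.

from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    quality: float   # proxy for organic relevance/reviews (0 to 1)
    ad_bid: float    # dollars paid for sponsored placement

def rank(listings):
    """Sponsored listings float above all organic ones, ordered by bid;
    organic listings are ordered by quality."""
    sponsored = sorted((l for l in listings if l.ad_bid > 0),
                       key=lambda l: l.ad_bid, reverse=True)
    organic = sorted((l for l in listings if l.ad_bid == 0),
                     key=lambda l: l.quality, reverse=True)
    return sponsored + organic

shelf = [
    Listing("good widget", quality=0.9, ad_bid=0.0),
    Listing("mediocre widget", quality=0.4, ad_bid=1.50),
    Listing("bad widget", quality=0.2, ad_bid=2.00),
]

for position, listing in enumerate(rank(shelf), 1):
    print(position, listing.name)
```

The unpaid high-quality listing sinks below every paying rival, which is the “danegeld” the quoted passage complains about.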

It’s getting harder to compete and make a living on online selling platforms. It would be great if Amazon sided with the indy sellers and quit the payola system. That will never happen.

Whitney Grace, January 4, 2024

23AndMe: The Genetics of Finger Pointing

January 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Well, well, another Silicon Valley outfit with Google-type DNA relies on its hard-wired instincts. What’s the situation this time? “23andMe Tells Victims It’s Their Fault That Their Data Was Breached” relates a now well-known game plan approach to security problems. What’s the angle? Here’s what the story in Techcrunch asserts:

image

Some rhetorical tactics are exemplified by children who blame one another for knocking the birthday cake off the counter. The instinct for self-preservation creates these all-too-familiar situations. Are Silicon Valley-type outfits childish? Thanks, MSFT Copilot Bing thing. I had to change my image request three times to avoid the negative filter for arguing children. Your approach is good enough.

Facing more than 30 lawsuits from victims of its massive data breach, 23andMe is now deflecting the blame to the victims themselves in an attempt to absolve itself from any responsibility…

I particularly liked this statement from the TechCrunch article:

After disclosing the breach, 23andMe reset all customer passwords, and then required all customers to use multi-factor authentication, which was only optional before the breach. In an attempt to pre-empt the inevitable class action lawsuits and mass arbitration claims, 23andMe changed its terms of service to make it more difficult for victims to band together when filing a legal claim against the company. Lawyers with experience representing data breach victims told TechCrunch that the changes were “cynical,” “self-serving” and “a desperate attempt” to protect itself and deter customers from going after the company.

And the consequences? The US legal processes will determine what’s going to happen.

Several observations:

  1. I particularly like the angle that cyber security is not the responsibility of the commercial enterprise. The customers are responsible.
  2. The lack of consequences for corporate behaviors creates opportunities for some outfits to do some very fancy dancing. Since a company is a “person,” Maslow’s hierarchy of needs kicks in.
  3. The genetics of some firms function with little regard for what some might call social responsibility.

The result is a situation that not even the original creative team for the 1980 film Airplane! (Flying High!) could have concocted.

Stephen E Arnold, January 4, 2024

No Digital Map and Doomed to Wander Clueless

January 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I am not sure if my memory is correct. I believe that some people have found themselves in a pickle when the world’s largest online advertising outfit produces “free” maps. The idea is that cost cutting, indifferent Googlers, and high school science club management methods cause a “free” map to provide information which may not match reality. I do recall that, on the way to the home of the fellow responsible for WordStar (a word processing program), an online search system, and other gems from the early days of personal computers, Google Maps suggested I drive off the highway, over a cliff, and into San Francisco Bay. I did not follow the directions from the “do no evil” outfit. I drove down the road, spotted a human, and asked for directions. But some people do not follow my method.


No digital maps. No clue. Thanks, MSFT Copilot Bing thing.

“Quairading Shire Erects Signs Telling Travelers to Ignore GPS Maps Including Google” includes a great photo of what appears to be a large sign. The sign says:

Your GPS Is Wrong. This is Not the Best Route to Perth. Turn Around and Travel via the Quairading-York Road.

That’s clear and good advice. As I recall, I learned on one of my visits to Australia that most insects, reptiles, mammals, and fish there can kill. Even the red kangaroo can become a problem, which is, I assume, why some in Australia gun them down. Stay on the highway and in your car. That was my takeaway from my first visit.

The write up says:

The issue has frustrated the Quairading shire for the past eight years.

Hey, the Google folks are busy. There are lawsuits, the Red Alert thing, and the need to manage a team going nowhere fast, like the duo of Alphabet online map services, Maps and Waze.

Net net: Buy a printed book of road maps and ask for directions. The problem is that those under the age of 25 may not be able to read a map or do what’s called orienteering. The French Foreign Legion runs a thorough program, and it is open to those who have not committed murder, can pass a physical test, and enjoy meeting people from other countries. Oh, and legionnaires do not need a mobile phone to find their way to a target or the local pizza joint.

Stephen E Arnold, January 4, 2024
