Let Technology Solve the Problem: Ever Hear of Russell and His Paradox?

September 21, 2022

I read “You Can’t Solve AI Security Problems with More AI.” The main idea, in my opinion, is that Russell’s Paradox is alive and well. The article states:

When you’re engineering for security, a solution that works 99% of the time is no good. You are dealing with adversarial attackers here. If there is a 1% gap in your protection they will find it—that’s what they do!

Obvious? Yep. That one percent is an issue. But the belief that technology can solve the problem is more of a delusional, marketing-oriented approach to reality. Some informed people are confident that one percent does not make much of a difference. Maybe? But what about a smart software system that generates flawed outputs with probabilities greater than one percent? Can technology address these issues? The answer offered by some is, “Sure, we have added this layer, that process, and these procedures to deliver accuracy in the 85, 90, or 95 percent range.” Yes, that’s “confidence.”

The write up points out:

Trying to prevent AI attacks with more AI doesn’t work like this. If you patch a hole with even more AI, you have no way of knowing if your solution is 100% reliable. The fundamental challenge here is that large language models remain impenetrable black boxes. No one, not even the creators of the model, has a full understanding of what they can do.

Eeep.

The article has what I think is a quite helpful suggestion; to wit:

There may be systems that should not be built at all until we have a robust solution.

What if we generalize beyond the issue of cyber security? What if we think about the smart software “fixing up” the problems in today’s zippy digitized world?

Rethink, go slow, and remember Russell’s Paradox? Not a chance.

Stephen E Arnold, September 21, 2022

Darktrace–Thoma Bravo Deal: An Antigen Reaction?

September 21, 2022

Darktrace is one of the cyber threat detection outfits to which I pay some attention. I read “Darktrace Shares Plunge After Thoma Bravo Acquisition Falls Apart.”

The article quotes an expert as saying:

“I don’t think Thoma Bravo is backing off of Darktrace because of valuations,” he [Richard Stiennon, chief research analyst at IT-Harvest] says. “I think strategically there is not a clear market for the AI-enhanced threat hunting that Darktrace touts. The market is pretty much equal to Darktrace’s revenue today.”

My take on the deal is that the cyber threat detection and cyber threat information services are not convincing some skeptical prospects. News like the teen who compromised the Uber taxi service and the sharp rise in ransomware attacks has created some Nervous Nellies. Multi-persona phishing and old-fashioned social engineering work in today’s work-from-home world. Plus, there is nothing like a bundle of cash promised to an insider who might be tempted to exchange access credentials for a new Tesla or a shopping spree at Costco.

Darktrace has done a masterful job of marketing. The Bayesian methods work reasonably well in certain use cases. Quite a chunk of change has been spent buying and marketing cyber related businesses.

One report (“Shares Plunge As US Private Equity Titan Backs Out of Darktrace Takeover”) said:

Darktrace revenue grew 45.7 per cent in the financial year to 30 June, while the customer base swelled 32.1 per cent year-over-year. However, the firm did note an accounting mishap, stating that $3.8m of revenue it had been recognising in the full year, including a portion recognised and reported in its unaudited half year results, was related to prior periods and should instead be recognised in full year 2021 results. This reallocation would reduce revenue reported this year to $415.5m from the $419.3m that was expected.

And cyber crime is at an all-time high, but I am not sure any firm, including Darktrace, has cracked the code.

Stephen E Arnold, September 21, 2022

How Quickly Will Rights Enforcement Operations Apply Copyright Violation Claims to AI/ML Generated Images?

September 20, 2022

My view is that the outfits which use a business model to obtain payment for images without going through an authorized middleman or middlethem (?) are beavering away at this moment. How do “enforcement operations” work? Easy. There is old and new code available to generate a “digital fingerprint” for an image. You can see how these systems work. Just snag an image from Bing, Google, or some other picture finding service. Save it to your local drive. Then navigate — let’s use the Google, shall we? — to Google Images and search by image. Plug in the location on your storage device and the system will return matches. TinEye works too. What you see are matches generated when the “fingerprint” of the image you upload matches a fingerprint in the system’s “memory.” When an entity like a SPAC-thinking Getty Images, PicRights, or similar outfit (these folks have conferences to discuss methods!) spots a “match,” the legal eagles take flight. One example of such a legal entity making sure the ultimate owner of the image and the middlethem get paid is — I think — something called “Higbee.” I remember the “bee” because the name reminded me of Eleanor Rigby. (The mind is mysterious, right?) The offender such as a church, a wounded veteran group, or a clueless blogger about cookies is notified of an “infringement.” The idea is that the ultimate owner gets money because why not? The middlethem gets money too. I think the legal eagle involved gets money because lawyers…
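The “digital fingerprint” idea can be sketched with an average hash, one of the older perceptual hashing tricks used by reverse-image services. This is a minimal illustration only, not the actual code any search service or enforcement outfit runs; the `average_hash` and `hamming_distance` names and the pre-downscaled 8×8 grayscale input are assumptions for the sketch, and production systems use sturdier methods.

```python
# Sketch of an "average hash" (aHash) perceptual fingerprint.
# Assumes the image has already been decoded and downscaled to
# an 8x8 grid of 0-255 grayscale values; real systems handle
# that step and use more robust hashes (pHash, embeddings).

def average_hash(pixels):
    """8x8 grid of grayscale values -> 64-bit fingerprint."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: is it brighter than the average?
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a likely match."""
    return bin(h1 ^ h2).count("1")

# A synthetic "image" and a lightly altered copy (think: recompressed).
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
altered = [row[:] for row in original]
altered[0][0] = min(255, altered[0][0] + 30)

h1, h2 = average_hash(original), average_hash(altered)
print(hamming_distance(h1, h2))  # small distance -> treated as a match
```

Because each bit only records “brighter or darker than average,” minor edits like recompression or a small crop barely move the fingerprint, which is why a snagged-and-reposted image still matches the one in the system’s “memory.”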

I read “AI Art Is Here and the World Is Already Different. How We Work — Even Think — Changes When We Can Instantly Command Convincing Images into Existence,” which takes a stab at explaining what the impact of AI/ML generated art will be. The write up nicks the topic, but it does not bury the pen and nib into the heart of the copyright opportunity.

Here’s a passage I noted from the cited article:

In contrast with the glib intra-VC debate about avoiding human enslavement by a future superintelligence, discussions about image-generation technology have been driven by users and artists and focus on labor, intellectual property, AI bias, and the ethics of artistic borrowing and reproduction.

Close but not a light saber cutting to the heart of what’s coming.

There is a long and growing list of things people can command into existence with their phones, through contested processes kept hidden from view, at a bargain price: trivia, meals, cars, labor. The new AI companies ask, Why not art?

Wrong question!

My hunch is that the copyright enforcement outfits will gather images, find a way to assign rights, and then sue the users of these images because the users did not know that the images were part of the enforcers’ furniture of a lawsuit.

Fair? Soft fraud? Something else?

The cited article does not consider these questions. Perhaps someone with a bit more savvy and a reasonably calibrated moral and ethical compass should?

Stephen E Arnold, September 20, 2022

How Does Googzilla Smother Competition: A Big Pile of Money Perhaps?

September 20, 2022

I am not a fan of short form, addictive-algorithmic games. Some are. Parents should be concerned about the usage of TikTok. I am not. I know that as schools in the US suffer shortages of teachers, there are solutions proven to work for the progeny of the upper one percent; for example:

  1. Camping at Kumon Math and Reading Center or a similar for-fee tutoring outfit’s classes
  2. Studying with a more informed individual, one-on-one just like a chess grandmaster’s coach
  3. Sitting down to a high powered computing device with a gigabit Internet connection and a supervisor, preferably a nun who once taught at a Jesuit university or a Chinese family’s really smart and demanding grandmother (nainai)
  4. Asking mumsey or popsey for help because the learner’s parents have advanced degrees
  5. Combining techniques.

The GOOG wants to be a player in the short form, attention eroding, baloney stuffed videos served up via a magical, smart software machine.

TikTok, Zuckbook, and others are going to try the old-fashioned way. Hard work, clean living, studying ethical business methods, and probably a prayer to either Euler (the god of mathies) or some other (probably less mathy) deity.

“YouTube Shorts Could Steal TikTok’s Thunder with a Better Deal for Creators” reveals in real news style the Google’s method; to wit:

YouTube Shorts is gearing up to announce an ad revenue sharing model that could revolutionize short form video and give TikTok a run for its money — literally… The company is reportedly set to announce a Partner Program-like ad revenue sharing model on Tuesday at its Made on YouTube event. If the rumors are true, YouTube Shorts creators would get 45% of ad revenue.

The source article has more quotes and factoids, but for my argument, the use of money is the key point. It seems only fair that a company with a lot of money and a stellar track record of making me-too products into big winners and solving the difficult problems of life like death might just use cash.

Simple, easy to understand, and very, very Googley.

Will it work? Sure, if regulators shift into gear and the children of those regulators abandon TikTok, the idea is a winner.

What is the sound a suffering Googzilla makes? For me it is the riffing of fat stacks of $100 bills.

Stephen E Arnold, September 20, 2022

Adobe: Figma May Channel Framemaker. Yikes!

September 20, 2022

I read a number of the articles about Adobe (yep, the Photoshop outfit) and its purchase of Figma. Many dinobabies use Adobe products. The youngsters are into mobile apps or Web apps when crafting absolutely remarkable online products and services. Adobe is a dinobaby product. Do you use channels? Yeah, right. I gave a lecture at the University of Michigan about something in which no one was interested except me and one professor who knew about my indexing methods. I do recall, however, talking with students at a free lunch which attracted a couple of people from the art and design or design and architecture or some similarly mystical fields. I sat with two of these individuals and learned that Adobe software was taught in their design course. I asked, “Why?” The answer was, “To get a job you have to know Photoshop and Illustrator.” Okay, because these interesting people were not going to get involved in machine indexing or what whippersnappers and the smartest people in the world call “metadata.”

One of the write ups about Adobe, a software subscription company, was “Adobe’s Figma Acquisition Is a $20 Billion Bet to Control the Entire Creative Market.” The write up states with incredible insight, confidence, and design savoir faire:

Adobe says the current plan is essentially for nothing to change.

Okay, I believe this statement. I believe everything I read on Silicon Valley-type real news services.

I would point to Adobe’s masterful handling of Framemaker. You use that program, probably more often than you use Adobe’s Channels controls.

Framemaker is a desktop publishing tool purpose built decades ago to make it possible to produce long, technical documents easily and quickly. Many operations could be automated. But Framemaker was a killer to learn. How often do you use Unix key combinations in your Windows applications? Framemaker did little to make certain things easy; for example, having a footnote that was longer than the hard coded limit in Framemaker or changing a color without navigating through absolutely crazy color libraries and clicking away like mad. A newcomer to Framemaker has zero clue about creating a new document and not having weird stuff happen with headers, footers, fonts, etc. Nevertheless, when one had to crank out documentation for a new tank or output the technical details of materials specifications, Framemaker was and in my office still is the go-to software. FYI: We stopped buying Framemaker after Adobe foisted one upgrade on me. I won’t detail my problems with the weird changes which made the software more difficult to use and set up for a production job. I uninstalled the Adobe flavor and went back to Framemaker 7.2, which was released a decade ago and was a gentle bug fix. But after Framemaker 7.2, the software was lost in space. I called it “outer limits” code.

Adobe purchased Framemaker and the product has not made much progress, maybe zero progress. The cost is now about $360 per year. You can read about its Adobe magical features here. There’s only one problem: The software has lost its way. Adobe wants everyone to use InDesign, a software ill-suited to crank out documentation for a weapons system in my opinion.

What will happen to Figma once in the new evergreen revenue oriented Adobe? I fear that Figma, which is unsuited for the type of content I produce, will become:

  1. Adobe’s version of Google’s Dodgeball and get kicked into a corner
  2. A Framemaker destined to disappoint dinobabies like me
  3. The greatest thing since Photoshop was equipped with a feature to open Illustrator files and not immediately crash.

The future will be exciting. Goodness, channeling Framemaker. What a thought.

Stephen E Arnold, September 20, 2022

Who Needs Books? Plus They Burn As Well As the Scrolls in the Library at Alexandria

September 20, 2022

Book banning is not new. Ever since humans could think and publish controversial ideas, literature has been banned. In ancient times, Abrahamic religious documents were deemed taboo. The last hundred years saw book banning in Nazi Germany and the socialist Soviet Union, and communist China continues to ban many works. The United States should be free of this quandary given the First Amendment in the Bill of Rights, but “over-concerned” people are advocating for the removal of titles. The Grid runs down the current state of book bans in “Book Banning In US Schools Has Reached An All-Time High: What This Means, And How We Got Here.”

In the past, most books that were banned dealt with magic and depictions of sex. Nowadays the books challenged the most are about gender and sexual orientation, ethnic diversity, and alternative takes on US history:

“Among the 10 most-challenged titles of 2021 were those from prominent Black writers Ibram X. Kendi, Jason Reynolds and Angie Thomas, according to the ALA. And five of the top 10 were challenged specifically because of their LGBTQ content.

From a broader perspective, of the 1,000-plus books banned from July 2021 to March 2022, 41 percent had main characters of color, 22 percent directly addressed race and racism, and 33 percent directly included LGBTQ themes and characters.”

Many groups working to ban titles are trying to protect childhood innocence. Some are not against these titles being published, but believe they do not belong in schools. It is hard to apply First Amendment rights to school curricula, because schools are under the control of school administrations. These administrations institute curricula and can remove books deemed “offensive.” Groups that sue to take books out of libraries and bookstores, however, face a losing battle because those books are protected by the First Amendment.

While the Internet allows kids to access books and other information, Internet access is far from universal, particularly in rural and low-income areas. When a book is banned from libraries or schools, some kids may have no access to it at all.

Banning books makes them more popular. Sometimes the banning helps books perform better than if they had been left alone. Book banning does not serve any purpose other than perpetuating willful ignorance.

Whitney Grace, September 20, 2022

Ad Duopoly: Missing Some Points?

September 19, 2022

The newspaper disguised as a magazine published “The $300B Google Meta Advertising Duopoly Is Under Attack,” which is interesting. The write up is what I would expect from a couple of MBAs beavering away at a blue chip consulting firm. If you are curious, read the story, for which you will have to pay. The story sparked some comments on HackerNews. These are interesting, and some of the comments contain more insightful information than the Under Attack write up itself. Here are a few comments to illustrate this point:

  • Sam Willis: To some extent I disagree with this, not that Google+Meta are under attack, but that the threat is coming from competitors. I’ve spent most of the last 10 years earning my living from an e-commerce business I own. The online advertising industry is unrecognisable from when we started. My thesis, in brief, is that the industry’s excessive use of personalised data and tracking led to increased regulation, and then a massive pivot to even more “AI” as a means to circumvent that (to some extent). The AI in the ad industry now, I believe, is detrimental to the advertiser. It’s now just one big black box, you put money in one side and get traffic out the other. The control and useful tracking (what actual search terms people are using, proper visible conversion tracking of an ad) is now almost non-existent. As an advertiser your livelihood is dependent on an algorithm, not skill, not intuition, not experience, not even track record. Facebook, Google and the rest of the industry were so driven by profit at all cost, and at the expense of long term thinking, they shot themselves in the foot. Advertisers are searching for alternatives, but they are all the same.
  • Justin Baker 84: Usually people need to get ripped off a few times before they accept that fact that Google is no longer a good actor.
  • Missedthecue: I get billed for so many accidental clicks.
  • Heavyset: Google Knows Best™ and lack of real competition or regulation means they can do whatever they want.
  • Prepend: I remember talking to some friends in Google, and they estimated their error/fraud rate to be about 1/3 of ad revenue. But they have no motivation to fix it and no one outside Google has the data to tell.
  • MichaelCollins: Organizations that are trying to do something disreputable or shameful (or just something that could be construed that way by a nontrivial portion of the population) often come up with sweet little lies about their motives that help their employees sleep better at night. It’s not about making money by serving ads, it’s about “organizing the world’s data”. It’s not about winning defense contracts to put military hardware into space, it’s about “colonizing mars to save humanity”. It’s not about printing money by getting poor people to sign up for 50,000% APR payday loans, it’s about “providing liquidity to underserved communities”. Etc.
  • Addicted: If you don’t pay Google/Facebook you’re absolutely screwed. You will lose no matter how good the product is. What this actually means is that now companies have to pay a Google/Meta tax simply to enter the playing field. And once they enter the playing field, the only winners will be the ones who pay them the highest amount of money. So a smaller business, which in the past could potentially use some ingenuity, or target a specific niche audience to get some traction and then build word of mouth and let the product do the talking, doesn’t even stand a chance now because they simply cannot differentiate themselves as your exposure is entirely dependent on how much money you give Google/Meta.

Dozens of useful comments appear in the HackerNews post. Worth scanning them in my opinion.

Stephen E Arnold, September 19, 2022

US Big Tech and Little Tech: Are the Priorities Clear?

September 19, 2022

Just a quick note to document the run up to Big Changes in 2023.

First, the estimable, much loved and respected, togetherness outfit made some of its priorities clear. I read “Meta Disbands Responsible Innovation Team, Spreads It Out over Facebook and Co.” [I think the “co” means company, whatever.]

The article states:

Meta spokesman Eric Porterfield told The Register that, rather than ending the efforts of the RIT, the disbanding will see “the vast majority” of the 20-person team moved into other areas at Meta “to help us scale our efforts by deploying dedicated experts directly into product areas, rather than as a standalone team.”  Per Porterfield, Meta’s official statement on the matter is that the work done by the RIT is more of a priority now – not less, as a disbanding of the team would suggest.

Yep, disbanding means amping up. One priority is clear to me: Dump a central group and bury what I think is an underfunded, understaffed, and mostly ignored function. Is Facebook saying, “Okay, find someone to pin a specific issue on now, you Silicon Valley real journalists and pesky Congress-people”?

The second item about priorities is from everyone’s favorite work around for MasterCard and Visa issues. I read “Patreon Cut At Least Five People From Its Security Team.” The article reports:

“As part of a strategic shift a portion of our security program, we have parted ways with five employees,” said Patreon in an emailed statement attributed to the company’s U.S. policy head, Ellen Satterwhite …

What’s this say to me? How about: Patreon management perceives that its security is really good, and that the layoffs don’t have an impact on security. Therefore, why not reduce the cyber security team? That makes sense in Patreon-land; here in Harrod’s Creek, Kentucky, not so much.

The third item concerns digital plumbing. I noted “Cisco Says It Won’t Patch These Dangerous VPN Security Flaws in Its SMB Routers.” The owner of the Talos security operation is okay with some security flaws. The article asserts:

Cisco has said it won’t be issuing any further updates for three vulnerable routers which could apparently allow an unauthenticated, remote attacker to bypass authentication controls and access the IPSec VPN network.

Good decision, right?

Net net: The priorities for 2023 are clear:

  1. Reorganize so it is tough to pinpoint who is doing what
  2. Assume that cost cutting will keep security in tip-top shape or at least less likely to be fixed up when a gap is discovered by bad actors
  3. Rationalize away doing security while sending a signal to bad actors that certain devices are vulnerable.

Outstanding management presages a super duper 2023.

Stephen E Arnold, September 19, 2022

Meta: The Efflorescence of Zucking

September 19, 2022

Years ago a colleague of mine and I spent a couple of days with Pat Gunkel. Ah, you don’t know him? Depending on whom one asks, he was either an interesting person or a once-in-a-generation genius. I have in front of me a copy of “The Efflorescent World View.” (Want to buy a copy handed to me by Mr. Gunkel? Just write us at benkent2020 at yahoo dot com. It’s a collectible because only a few of Mr. Gunkel’s books are findable in our wonderful, search-tastic online world.) The image below shows what an actual Gunkel book from his office looks like:

[image: Gunkel book cover]

I thought about Mr. Gunkel when I read “Meta Shares Plunged 14% This Week, Falling Close to Their Pandemic Low.” Mr. Gunkel’s method involved creating lists. Lots of lists. I think he would have relished the challenge of cataloging Mr. Zuck’s impressive achievements; for example:

  • The reference stock plunge
  • Implementing an employee management technique in which employees learn that some of them should not be Zuckers
  • The “make friends with your neighbors in Hawaii” actions
  • The elimination of personal cubes and work spaces in Meta’s offices
  • The renaming of the company to celebrate the billions invested in what eGame developers have been doing for — yeah, how long — for decades
  • Thinking about charging for its unpopular clone of a really popular app. (Genius with a twist of Zuck? Yes!)

But what’s an “efflorescence,” some may ask? I have a big fat book on the subject. Let me summarize: One might say a gradual flowering. Others might suggest that it represents a culmination.

My hunch is that the year 2022 marks the efflorescence of the Zuckbook, the knock off of TikTok, and the push to make WhatsApp a superapp for good and evil.

The efflorescence of Zucking. Too bad Mr. Gunkel is no longer with us to undertake this project. He was, I must say, very interesting.

Stephen E Arnold, September 19, 2022

Techno-Confidence: Unbounded and Possibly Unwarranted

September 19, 2022

People have long speculated about how advancing technology will change society, especially in the twentieth century. We were supposed to have flying cars, holograms would be a daily occurrence, and automation would make most jobs obsolete. Yet here we are in the twenty-first century, and futurists only got some of the predictions right. It raises the question of whether technology developers, such as deep learning researchers, are overhyping their industry. AI Snake Oil explores the idea in “Why Are Deep Learning Technologists So Overconfident?”

According to the authors Arvind Narayanan and Sayash Kapoor, the hype surrounding deep learning is similar to past and present scientific dogma: “a core belief that binds the group together and gives it its identity.” Deep learning researchers’ dogma is that learning problems can be solved by collecting training examples. It sounds great in theory, but simply collecting training examples is not a complete answer.

It does not take much investigation to discover that deep learning training datasets are rich in biased and incomplete information. Deep learning algorithms are incapable of understanding perception, judgment, and social problems. Researchers describe the algorithms as great prediction tools, but that is far from the truth.

Deep learning researchers are aware of the faults in the technology and are stuck in the same us vs. them mentality that inventors have found themselves in for centuries. Perceptions of deep learning are based less on facts than on the same confident predictions that accompanied past technologies:

“This contempt is also mixed with an ignorance of what domain experts actually do. Technologists proclaiming that AI will make various professions obsolete is like if the inventor of the typewriter had proclaimed that it will make writers and journalists obsolete, failing to recognize that professional expertise is more than the externally visible activity. Of course, jobs and tasks have been successfully automated throughout history, but someone who doesn’t work in a profession and doesn’t understand its nuances is in a poor position to make predictions about how automation will impact it.”

Deep learning will be the basis for future technology, but it has a long way to go before it is perfected. All advancements go through trial and error. Deep learning researchers need to admit their mistakes, invest in better datasets, and experiment. Practice makes perfect! When smart software goes off the rails, there are PR firms to make everything better again.

Whitney Grace, September 19, 2022
