Profits Over Promises: IBM Sells Facial Recognition Tech to British Government

September 18, 2023

Just three years after it swore off any involvement in facial recognition software, IBM has made an about-face. The Verge reports, “IBM Promised to Back Off Facial Recognition—Then it Signed a $69.8 Million Contract to Provide It.” Amid the momentous Black Lives Matter protests of 2020, IBM CEO Arvind Krishna wrote a letter to Congress vowing to no longer supply “general purpose” facial recognition tech. However, it appears that is exactly what the company includes within the biometrics platform it just sold to the British government. Reporter Mark Wilding writes:

“The platform will allow photos of individuals to be matched against images stored on a database — what is sometimes known as a ‘one-to-many’ matching system. In September 2020, IBM described such ‘one-to-many’ matching systems as ‘the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights.'”
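For readers curious about the mechanics, a “one-to-many” system compares one probe image against every record in a gallery database, rather than verifying a single claimed identity. A minimal sketch using toy vectors in place of real face embeddings follows; the function name, threshold, and data are illustrative inventions, not details of IBM’s platform.

```python
import numpy as np

def one_to_many_match(probe, gallery, threshold=0.8):
    """Return (index, score) pairs for gallery embeddings whose cosine
    similarity to the probe embedding meets the threshold (illustrative)."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery @ probe  # cosine similarity of probe vs. every stored face
    return [(i, float(s)) for i, s in enumerate(scores) if s >= threshold]

# Toy "gallery" of three stored face embeddings and one probe photo
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
probe = np.array([1.0, 0.05])
matches = one_to_many_match(probe, gallery)
```

The privacy concern flows directly from this shape: every person in the gallery is searched on every query, which is what distinguishes it from a one-to-one check such as unlocking a phone.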

In the face of this lucrative contract, IBM has changed its tune. It now insists one-to-many matching tech does not count as “general purpose” since the intention here is to use it within a narrow scope. But scopes have a nasty habit of widening to fit the available tech. The write-up continues:

“Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: ‘The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies — including IBM — must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding.’ Police use of facial recognition has been linked to wrongful arrests in the US and has been challenged in the UK courts. In 2019, an independent report on the London Metropolitan Police Service’s use of live facial recognition found there was no ‘explicit legal basis’ for the force’s use of the technology and raised concerns that it may have breached human rights law. In August of the following year, the UK’s Court of Appeal ruled that South Wales Police’s use of facial recognition technology breached privacy rights and broke equality laws.”

Wilding notes other companies similarly promised to renounce facial recognition technology in 2020, including Amazon and Microsoft. Will governments also be able to entice them into breaking their vows with tantalizing offers?

Cynthia Murrell, September 18, 2023

Microsoft: Good Enough Just Is Not

September 18, 2023

Was it the Russian hackers? What about the special Chinese department of bad actors? Was it independent criminals eager to impose ransomware on hapless business customers?

No. No. And no.


The manager points his finger at the intern working the graveyard shift and says, “You did this. You are probably worse than those 1,000 Russian hackers orchestrated by the FSB to attack our beloved software. You are a loser.” The intern is embarrassed. Thanks, Mom MJ. You have the hands almost correct… after nine months or so. Gradient descent is your middle name.

“Microsoft Admits Slim Staff and Broken Automation Contributed to Azure Outage” presents an interesting interpretation of another Azure misstep. The report asserts:

Microsoft’s preliminary analysis of an incident that took out its Australia East cloud region last week – and which appears also to have caused trouble for Oracle – attributes the incident in part to insufficient staff numbers on site, slowing recovery efforts.

But not really. The report adds:

The software colossus has blamed the incident on “a utility power sag [that] tripped a subset of the cooling units offline in one datacenter, within one of the Availability Zones.”

Ah, ha. Is the finger of blame like a heat-seeking missile? By golly, it will find something like a hair dryer, fireworks at a wedding where such events are customary, or a passenger aircraft. A great high-tech manager will say, “Oops. Not our fault.”

The Register’s write up points out:

But the document [an official explanation of the misstep] also notes that Microsoft had just three of its own people on site on the night of the outage, and admits that was too few.

Yeah. Work from home? Vacay time? Managerial efficiency planning? Whatever.

My view of this unhappy event is:

  1. Poor managers making bad decisions
  2. A drive for efficiency instead of a drive toward excellence
  3. A Microsoft Bob moment.

More exciting Azure events in the future? Probably. More finger pointing? It is a management method, is it not?

Stephen E Arnold, September 18, 2023

Turn Left at Ethicsville and Go Directly to Immoraland, a New Theme Park

September 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Stanford University lost a true icon of scholarship. Why is this individual leaving the august institution, a hot spot of modern ethical and moral discourse? Yeah, the leader apparently confused real and verifiable data with less real and tough-to-verify data. Across the country, an ethics professor, no less, is on leave or parked in an academic rest area over a similar allegation. I will not dwell on the outstanding concept of just using synthetic data to inform decision models, a practice once held in esteem at the Stanford Artificial Intelligence Lab.


“Gasp,” one PhD utters. An audience of scholars reveals shock and maybe horror when a colleague explains that making up, recycling, or discarding data at odds with the “real” data is perfectly reasonable. The brass ring of tenure and maybe a prestigious award for research justify a more hippy dippy approach to accuracy. And what about grants? Absolutely. Money allows top-quality research to be done by graduate assistants. Everyone needs someone to blame. MidJourney, keep on slidin’ down that gradient descent, please.

“Scientist Shocks Peers by Tailoring Climate Study” provides more color for these no-ethics actions by leaders of impressionable youth. I noted this passage:

While supporters applauded Patrick T. Brown for flagging what he called a one-sided climate “narrative” in academic publishing, his move surprised at least one of his co-authors—and angered the editors of leading journal Nature. “I left out the full truth to get my climate change paper published,” read the headline to an article signed by Brown…

Ah, the greater good logic.

The write up continued:

A number of tweets applauded Brown for his “bravery”, “openness” and “transparency”. Others said his move raised ethical questions.

The write up raised just one question I would like answered: “Where has education gone?” Answer: Immoraland, a theme park with installations at Stanford and Harvard with more planned.

Stephen E Arnold, September 14, 2023

What Is More Important? Access to Information or Money?

September 14, 2023

Laws that regulate technology can be outdated because they were written before the technology was invented. While that is true, politicians have updated laws to address situations that arise from advancing technology. Artificial intelligence is causing a flurry of new legislative concerns. The Conversation explains that there are already laws regulating AI on the books, but they are not being followed: “Do We Need A New Law For AI? Sure-But First We Could Try Enforcing The Laws We Already Have.”

In the early days of the Internet and mass implementation of computers, regulation was a bad word, akin to censoring freedom of speech, and was seen as something that would impede technological progress. AI technology is changing that idea. Australian Minister for Industry and Science Ed Husic is leading the charge for an end to technology self-regulation, a charge that could inspire lawmakers in other countries.

Husic wants his policies to focus on high risk issues related to AI and balancing the relationship between humans and machines. He no longer wants the Internet and technology to be a lawless wild west. Big tech leaders such as OpenAI Chief Executive Sam Altman said regulating AI was essential. OpenAI developed the ChatGPT chatbot/AI assistant. Altman’s statement comes ten years after Facebook founder Mark Zuckerberg advised people in the tech industry to move fast and break things. Why are tech giants suddenly changing their tune?

One idea is that tech giants understand the dangers associated with unbridled AI. They realize without proper regulation, AI’s negative consequences could outweigh the positives.

There are already laws regulating AI in most countries, but they refer to technology in general:

“Our current laws make clear that no matter what form of technology is used, you cannot engage in deceptive or negligent behavior.

Say you advise people on choosing the best health insurance policy, for example. It doesn’t matter whether you base your advice on an abacus or the most sophisticated form of AI, it’s equally unlawful to take secret commissions or provide negligent advice.”

The article was written by tech leaders at the Human Technology Institute located at the University of Technology Sydney, who are calling for Australia to create a new government role, the AI Commissioner. This new role would be an independent expert advisor to the private and government sector to advise businesses and lawmakers on how to use and enforce AI within Australia’s laws. Compared to North America, the European Union, and many Asian countries, Australia has dragged its heels developing AI laws.

The authors stress that personal privacy must be protected, as it is under laws that already exist in Europe. They also cite examples of how mass automation of tasks led to discrimination and bureaucratic nightmares.

An AI Commissioner is a brilliant idea, but it places the responsibility on one person. A small regulatory board monitored like other government bodies would be a better idea. Since the idea is logical, the Australian government will fail to implement it. That is not a dig on Australia. Any and all governments fail at implementing logical plans.

Whitney Grace, September 14, 2023

YouTube and Click Fraud: A Warning Light Flashing?

September 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I spotted a link to a 16-minute, long-form, old-fashioned YouTube video from Lon.TV titled “YouTube Invalid Traffic.” The person who runs Lon.TV usually reviews gadgets, but this video identifies a demonetization procedure apparently used by the non-monopoly Google. (Of course, I believe Google’s assertion that almost everyone uses Google because it is just better.)


The creator reads an explanation of an administrative action and says, “What does this mean?” Would a non-monopoly provide a non explanation? Probably a non not. Thanks, MidJourney, the quality continues to slip. Great work.

Lon.TV explains that the channel received a notice of fraudulent clicks. The “fix”, which YouTube seems to implement unilaterally and without warning, decreases a YouTuber’s income. The normal Google “help” process results in words which do not explain the details of the Google-identified problem.

Click fraud has been a tricky issue for ad-supported Google for many years. About a decade ago, a conference organizer wanted me to do a talk about click fraud, a topic I did not address in my three Google monographs. The reports for a commercial company footing the bill for my research did get information about click fraud. My attorney at the time (may he rest in peace) advised me to omit that information from the monographs published by a now defunct publisher in the UK. I am no legal eagle, but I do listen to them, particularly when it costs me several hundred dollars an hour.

Click fraud is pretty simple. One can have a human click on a link. If one is serious, one can enlist a bunch of humans using an ad on Craigslist.com. A more enterprising click fraud player would write a script and blast through a target’s ad budget, rack up lots of popularity points, or make a so-so video into the hottest sauce pan on the camp fire.

Lon.TV’s point is that most of his site’s traffic originates from Google searches. A person looking for a camera review runs a query on Google. The Google results point to a Lon.TV video. The person clicks on the Google generated link, and the video plays. The non-monopoly explains, as I understand it, that the fraudulent clicks are the fault of the YouTuber. So, the bad actor is the gadget guy at Lon.TV.

I think there is some useful information or signals in this video. I shall share my observations:

  1. Click fraud, based on my research a decade ago, was indeed a problem for the non-monopoly. In fact, the estimable company was trying to figure out how to identify fraudulent clicks and block them. The work was not a path to glory, so turnover often plagued those charged with stamping out click fraud. Plus, the problem was “hard.” Simple fixes like identifying lots of clicks in a short time were easily circumvented. More sophisticated ones like figuring out blocks of IP addresses responsible for lots of time spaced clicks were okay until the fraudsters figured out another approach. Thus, cat-and-mouse games began.
  2. The entire point of YouTube.com is to attract traffic. Therefore, it is important to recognize a valid new trend (like videos of females wearing transparent clothing) and to accept that clicks on dull stuff (like streaming video of an erupting volcano) are less magnetic. With more clicks, many algorithmic beavers jump in the river. More clicks means more ads pushed. More ads pushed means more clicks on those ads and, hence, more money. It does not take much thought to figure out that a tension exists between lots-of-clicks Googlers and block-those-clicks Googlers. In short, progress is slow and money generation wins.
  3. TikTok has caused Google to undermine its long form videos to deal with the threat of the China-linked competitor. The result has been an erosion of clicks because one cannot inject as many ads into short videos as big boy videos. Oh, oh. Revenue gradient decline. Bad. Quick fix. Legitimize keeping more ad revenue? Would a non monopoly do that?
  4. The signals emitted by Lon.TV indicate that Google’s policy identified by the gadget guy is to blame the creator. Like many of Google’s psycho-cognitive methods used to shift blame, the hapless creator is punished for the alleged false clicks. The tactic works well because what’s the creator supposed to do? Explain the problem in a video which is not pushed?
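The cat-and-mouse detection problem described in point one above can be illustrated with a naive sliding-window heuristic: flag any source that produces too many clicks in too short a time. The class name, thresholds, and sample IP address below are invented for illustration and are certainly not Google’s actual method.

```python
from collections import defaultdict, deque

class ClickFraudDetector:
    """Naive sliding-window heuristic: flag an IP that produces more than
    `max_clicks` clicks within `window_seconds`. Easily circumvented by
    spacing clicks out or rotating IP addresses, which is exactly the
    cat-and-mouse dynamic described above."""

    def __init__(self, max_clicks=5, window_seconds=60):
        self.max_clicks = max_clicks
        self.window = window_seconds
        self.clicks = defaultdict(deque)  # ip -> recent click timestamps

    def record_click(self, ip, timestamp):
        q = self.clicks[ip]
        q.append(timestamp)
        # Discard clicks that have aged out of the window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_clicks  # True means "suspicious"

# Four rapid clicks trip the detector; a later, spaced-out click does not
detector = ClickFraudDetector(max_clicks=3, window_seconds=10)
flags = [detector.record_click("203.0.113.7", t) for t in [0, 1, 2, 3, 30]]
```

As the blog notes, real fraudsters defeated exactly this kind of rule by spreading clicks across time and IP blocks, forcing ever more elaborate countermeasures.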

Net net: Click fraud is a perfect cover to demonetize certain videos. What happens to the ad money? Does Google return it to the advertiser? Does Google keep it? Does Google credit the money back to the advertiser’s account and add a modest “handling fee”? I don’t know, and I am pretty sure the Lon.TV fellow does not either. Furthermore, I am not sure Google “knows” what its different units are doing about click fraud. What’s a non-monopoly supposed to do? I think the answer is, “Make money.” More of these methods are likely to surface in the future.

Stephen E Arnold, September 13, 2023

Apple and Microsoft: Gatekeeping Is Not for Us. We Are Too Small. That Is Correct. Small.

September 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Apple and Microsoft Say Flagship Services Not Popular Enough to Be Gatekeepers.” Pretty amazing. Apple wanted to be a gatekeeper and mobile phone image cop, and the Microsoft Edge Bing thing routinely polices what its smart software outputs.


The American high school homecoming king and queen, both members of the science club, insist they are not popular. How, one may ask, did you get elected king and queen? The beaming royals said, “We are just small. You know, little itty bitty things. Do you like our outfits?” Thanks, MidJourney. Stay true to the gradient descent thing, please.

Both outfits have draconian procedures to prevent a person from doing much of anything unless one of the den mothers working for these companies gives a nod of approval.

The weird orange newspaper states:

Apple and Microsoft, the most valuable companies in the US, have argued some of their flagship services are insufficiently popular to be designated “gatekeepers” under landmark new EU legislation designed to curb the power of Big Tech. Brussels’ battle with Apple over its iMessage chat app and Microsoft’s search engine Bing comes ahead of Wednesday’s [September 6, 2023] publication of the first list of services that will be regulated by the Digital Markets Act.

The idea is a bit deeper in my opinion. Obviously neither of these outfits wants to pay fines; both want to collect money. But the real point is that this “aw, shucks” attitude is one facet of US high tech outfits’ ability to anger regulators in other countries. I have heard the words “arrogant,” “selfish,” “greedy,” and worse used to describe the smiling acolytes who represent these two firms in their different legal battles in Europe.

I want to look at this somewhat short-sighted effort by Apple and Microsoft from a different point of view. Google, in my opinion, is likely to become the gatekeeper, the enforcer, the toll road collector, and the arbiter of framing “truth.” Why? Google is ready, willing, and able to fill the void.

One would assume that Apple and Microsoft would have a sit down with the Zuckbook to discuss the growing desire for content control and dissemination. Nope. The companies are sufficiently involved in their own alleged monopolistic ideas to think about a world in which Google becomes the decider.

Some countries view the US and its techno-business policies and procedures with some skepticism. What happens if the skepticism morphs into another notion? Will Teams and iPhones be enough to make these folks happy?

Stephen E Arnold, September 13, 2023

AI: Juicing Change

September 13, 2023

Do we need to worry about how generative AI will change the world? Yes, but no more than we had to fear automation, the printing press, horseless carriages, and the Internet. The current technology revolution is analogous to the Industrial Revolutions and technological advancements of past centuries. University of Chicago history professor Ada Palmer is aware of humanity’s cyclical relationship with technology, and she discusses it in her Microsoft Unlocked piece: “We Are An Information Revolution Species.”

Palmer explains that the human species has been living in an information revolution for twenty generations. She provides historical examples and how people bemoan changes. The changes arguably remove the “art” from tasks. These tasks, however, are simplified and allow humans to create more. It also frees up humanity’s time to conquer harder problems. Changes in technology spur a democratization of information. They also mean that jobs change, so humans need to adapt their skills for continual survival.

Palmer says that AI is just another tool as humanity progresses. She asserts that the bigger problems are outdated systems that no longer serve the current society. While technology has evolved so has humanity:

“This revolution will be faster, but we have something the Gutenberg generations lacked: we understand social safety nets. We know we need them, how to make them. We have centuries of examples of how to handle information revolutions well or badly. We know the cup is already leaking, the actor and the artist already struggling as the megacorp grows rich. Policy is everything. We know we can do this well or badly. The only sure road to real life dystopia is if we convince ourselves dystopia is unavoidable, and fail to try for something better.”

AI does need a social safety net so it does not transform into a sentient computer hellbent on world domination. Palmer should point out that humans learn from their imaginations too. Star Trek or 2001: A Space Odyssey anyone? Nah, too difficult. Just generate content and sell ads.

Whitney Grace, September 13, 2023

New Wave Management or Is It Leaderment?

September 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Here’s one of my biases, and I am rather proud of it. I like the word “manager.” According to my linguistics professor Lev Soudek, the word “manage” used to mean trickery and deceit. When I was working at a blue chip consulting firm, the word meant using tactics to achieve a goal. I think of management as applied trickery. The people whom one pays will go along with the program, but not 24×7. In a company which expects 60 hours of work a week as the minimum for survival under a Spanish Inquisition-inspired personnel approach, mental effort had to be expended.

I read “I’m a Senior Leader at Amazon and Have Seen Many Bad Managers. Here Are 3 Reasons Why There Are So Few Great Ones.” The intense, clear-eyed young person explains that he has worked at some outfits which are not among my list of the Top 10 high-technology outfits. His résumé includes eBay (a digital yard sale), a game retailer, and the somewhat capricious Amazon (are we a retail outfit, are we a cloud outfit, are we a government services company, are we a data broker, are we a streaming company, etc.).


A modern practitioner of leaderment is having trouble getting the employees to fall in, throw their shoulders back, and march in step to the cadence of Am-a-zon, Am-a-zon like a squad of French Foreign Legion troops on Bastille Day. Thanks, MidJourney. The illustration did not warrant a red alert, but it is also disappointing.

I assume that these credentials are sufficient to qualify him as a management guru. Here are the three reasons managers are less than outstanding.

First, managers just sort of happen. Few people decide to be a manager. Ah, serendipity or just luck.

Second, managers don’t lead. (Huh, the word is “management”, not “leaderment.”)

Third, pressure for results means some managers are “sacrificing employee growth.” (I am not sure what this statement means. If one does not achieve results, then that individual and maybe his direct reports, the staff he leaderments, and his boss will be given an opportunity to find their future elsewhere. Translation for the GenZ reader: You are fired.)

Let’s step back and think about these insights. My initial reaction is that a significant re-languaging has taken place in the write up. A good manager does not have to be a leader. In fact, when I was a guest lecturer at the Kansai Institute of Technology, I met a number of respected Japanese managers. I suppose some were leaders, but a number made it clear that results were number one or ichiban.

In my work career, confusing “to manage” with “to lead” would have created some confusion. I recall when I was working in the US Congress with a retired admiral who had been elected to represent an upscale LA district; the way life worked was simple: The retired admiral issued orders. Lesser entities like myself figured out how to execute, tapped appropriate resources, and got the job done. There was not much leadership required of me. I organized; I paid people money; and I hassled everyone until the retired admiral grunted in a happy way. There was no leaderment for me. The retired admiral said, “I want this in two days.” There was not much time for leaderment.

I listened to a podcast called GeekWire. The September 2, 2023, program made it clear that the current big dog at Amazon wants people to work in the office. If not, these folks are going to go away. What makes this interesting is that the GeekWire pundits pointed out that the Big Dog had changed his story, guidelines, and procedures for this work from home and work from office approach multiple times.

Therefore, I am not sure if there is management or leaderment at the world’s largest digital mall. I do know that modern leaderment is not for me. The old-fashioned meaning of manage seems okay to me.

Stephen E Arnold, September 12, 2023

Will the Cloud Energize Google or Just Generate Marketing Material?

September 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article in Forbes (once the capitalist tool and now a tool for capitalists, I think) titled “How Google Cloud Is Leveraging Generative AI To Outsmart Competition.” The competition? Does this mean AI entities in China, quasi-monopolies like Facebook (aka Meta) and Microsoft, or tiny start-ups with piles of venture funding?


A decider in the publishing sector learns how to make it rain money. Is the method similar to that of the era of Yellow Journalism? Nope. The approach is squarely in line with Madison Avenue’s traditional approach. Thanks, Mother MidJourney. No red alert. Try to scramble up the gradient descent today, please.

The article’s title signals content marketing to me. As I read through the essay, it struck me as product placement.

Let me cite a couple of examples:

First, consider this passage:

Compared to Cloud TPU v4, the new Google Cloud TPU v5e has up to 2x higher training performance per dollar and up to 2.5x higher inference performance per dollar for LLMs and generative AI models. … Google is introducing Multislice technology in preview to make it easier to scale up training jobs, allowing users to quickly scale AI models beyond the boundaries of physical TPU pods—up to tens of thousands of Cloud TPU v5e or TPU v4 chips.

The “information” seems to come from a technical source proud of the advanced developments at the beloved Google. I would suggest that the information payload of the passage is zero for a person working in a Fortune 1000 company engaged in retail or financial services. In my opinion, the information is not even useful for marketing. Forbes is writing for the people not in the Google AI parade.

What about this passage?

Having its own foundation models enables Google to iterate faster based on usage patterns and customer feedback. Since the announcement of PaLM2 at Google I/O in April 2023, the company has enhanced the foundation model to support 32,000 token context windows and 38 new languages. Similarly, Codey, the foundation model for code completion, offers up to a 25% quality improvement in major supported languages for code generation and code chat. The primary benefit of owning the foundation model is the ability to customize it for specific industries and use cases.

Let’s set aside the tokens thing and the assertion about “25 percent quality improvement” and get to the point: “The primary benefit of owning the foundation model is the ability to customize it for specific industries and use cases.” To me, that means Google wants control: the foundation, the tools for building, and the use cases. Since these are software, Google benefits because it furthers its alleged monopoly grip on information. Furthermore, Google as a super user can easily inject for-fee, weaponized, or shaped content into the workflows to achieve its objective: money. I suppose some of the people in the parade will get a payoff like a drink of Google-Ade. But the winner is Google.

My view of this “real” news write up is a recycling of comments I have offered in my essays since the days of Backrub:

  • Google’s technology is designed to allow control of information
  • The methods are those of other alleged monopolies: Control and distribution to generate money and toll booths
  • The executives are unable to break out of the high school science club bubble in which they think, explain, and operate.

I wonder if Malcolm Forbes would be happy with this “real” news about Google, the number three cloud provider making a play to mash up infrastructure, information processing, and monetization, in an objective news story.

My hunch is that he would want to ride his Harley up Broadway to get away from those who have confused product placement with hard reporting.

Stephen E Arnold, September 12, 2023

AI and the Legal Eagles

September 11, 2023

Lawyers and other legal professionals know that AI algorithms, NLP, machine learning, and robotic process automation can transform their practices. These tools promise to increase profits, process cases faster, and improve efficiency. The possibilities for AI in legal practice appear to be a win-win situation. ReadWrite discusses how different AI processes can assist law firms and the hurdles for implementation in “Artificial Intelligence In Legal Practice: A Comprehensive Guide.”

AI will benefit law firms by streamlining research and analytics processes. Machine learning and NLP can consume large datasets faster and more efficiently than humans. Contract management and review processes will be greatly improved because AI offers more comprehensive analysis, detects discrepancies, and decreases repetitive tasks.

AI will also lighten legal firms’ workloads with document automation and case management. Automating legal documents, such as leases, deeds, wills, and loan agreements, will decrease errors and reduce review time. AI will lower costs for due diligence procedures and e-discovery through automation and data analytics. These benefits will please clients who want speedy results and low legal bills.

Law firms will benefit the most from NLP applications, predictive analytics, machine learning algorithms, and robotic process automation. Virtual assistants and chatbots also have their place in law firms as customer service representatives.

Despite all the potential improvements from AI, legal professionals need to adhere to data privacy and security procedures. They must also develop technology management plans that include authentication protocols, backups, and identity management strategies. AI biases, such as diversity and sexism issues, must be evaluated and avoided in legal practices. Transparency and ethical concerns must also be addressed to be compliant with governmental regulations.

The biggest barriers, however, will be overcoming reluctant staff, costs, anticipating ROI, and compliance with privacy and other regulations.

“With a shift from viewing AI as an expenditure to a strategic advantage across cutting-edge legal firm practices, embracing the power of artificial intelligence demonstrates significant potential for intense transformation within the industry itself.”

These challenges are not any different from past technology implementations, except AI could make lawyers more reliant on technology than their own knowledge. Cue the Jaws theme music.

Whitney Grace, September 11, 2023
