The Alleged Apple M1 Vulnerability: Just Like Microsoft?

June 15, 2022

I read “MIT Researchers Uncover Unpatchable Flaw in Apple M1 Chips.” I have no idea if the exploit is one that can be migrated to a Dark Web or Telegram Crime as a Service pitch. Let’s assume that there may be some truth to the clever MIT wizards’ discoveries.

First, note this statement from the cited article:

The researchers — which presented their findings to Apple — noted that the Pacman attack isn’t a “magic bypass” for all security on the M1 chip, and can only take an existing bug that pointer authentication protects against.

And this:

In May last year, a developer discovered an unfixable flaw in Apple’s M1 chip that creates a covert channel that two or more already-installed malicious apps could use to transmit information to each other. But the bug was ultimately deemed “harmless” as malware can’t use it to steal or interfere with data that’s on a Mac.

I may be somewhat jaded, but if these statements are accurate, the "unpatchable" adjective is a slice of today's reality. Windows Defender may not defend. SolarWinds may burn with unexpected vigor. Cyber security software may be more compelling in a PowerPoint deck than installed on a licensee's system, wherever it resides.

The key point is that, as with many functions in modern life, there is no easy fix. Human error? Indifference? Clueless quality assurance and testing processes?

My hunch is that this is a culmination of the attitude of “good enough” and “close enough for horseshoes.”

One certainty: Bad actors are encouraged by assuming that whatever is produced by big outfits will have flaws, backdoors, loopholes, stupid mistakes, and other inducements to break laws.

Perhaps it is time for a rethink?

Stephen E Arnold, June 15, 2022

Alphabet Google and the Caste Bias Cook Out

June 3, 2022

The headline in the Bezosish Washington Post caught my attention. Here it is: “Google’s Plan to Talk about Caste Bias Led to Division and Rancor.” First off, I had zero idea what caste bias means, connotes, denotes, whatever.

Why not check with the Delphic Oracle of Advertising aka Google? The Alphabet search system provides this page of results to the query “caste bias”:

[image: Google search results for the query "caste bias"]

Look: no ads. Gee, I wonder why? Okay, not particularly helpful info.

I tried the query “caste bias Google” on Mr. Pichai’s answer machine and received this result:

[image: Google search results for the query "caste bias Google"]

Again, no ads? What? Why? How?

Are there no airlines advertising flights to a premier vacation destination? What about hotels located in sunny Mumbai? No car rental agencies? (Yeah, renting a car in Delhi is probably not a good idea for someone from Tulsa, Oklahoma.) And the references to “casteist” baffled me. (I would have spelled casteist as castist, but what do I know?)

Let's try Swisscows.com with the query "caste bias Google":

[image: Swisscows.com search results for "caste bias Google"]

Nice results, but I still have zero idea about caste bias.

I knew about the International Dalit Solidarity Network. I navigated the IDSN site. Now we’re cooking with street trash and tree branches in the gutter next to a sidewalk where some unfortunate people sleep in Bengaluru:

[image: the International Dalit Solidarity Network's page on caste discrimination]

“Caste discrimination” means if one is born to a high caste, that caste rank is inherited. If one is born to a low caste, well, someone has to sweep the train stations and clean the facilities, right? (I am paraphrasing, thank you.)

Now back to the Bezosish article cited above. I can now put this passage in the context of Discrimination World, an employment theme park, in my opinion:

Soundararajan [born low caste] appealed directly to Google CEO Sundar Pichai, who comes from an upper-caste family in India, to allow her presentation to go forward. But the talk was canceled, leading some employees to conclude that Google was willfully ignoring caste bias. Tanuja Gupta, a senior manager at Google News who invited Soundararajan to speak, resigned over the incident, according to a copy of her goodbye email posted internally Wednesday [June 1, 2022] and viewed by The Washington Post. India’s engineers have thrived in Silicon Valley. So has its caste system. [Emphasis added.]

Does this strike you as slightly anti "Land of the Free and Home of the Brave"? The article makes it pretty clear that a low caste person appealed to a high caste person for permission to speak. That permission was denied. No revealing attire at Discrimination World. Then another person, who judging by that individual's name might be Indian, quit in protest.

Then the killer: Google hires Indian professionals and those professionals find themselves working in a version of India’s own Discrimination World theme park. And, it seems, that theme park has rules. Remember when Disney opened a theme park in France and would not serve wine? Yeah, that cultural export thing works really well. But Disney’s management wizards relented. Alphabet is spelling out confusion in my opinion.

Putting this in the context of Google's approach to regulating what one can and cannot say about snorkel-wearing smart software people, the company has a knack for sending signals about equality. Googlers are not sitting around the digital campfire singing Joan Baez's Kumbaya.

Googlers send signals about caste behavior described by the International Dalit Solidarity Network this way:

'Untouchables' – known in South Asia as Dalits – are often forcibly assigned the most dirty, menial and hazardous jobs, [emphasis added] and many are subjected to forced and bonded labour. Due to exclusion practiced by both state and non-state actors, they have limited access to resources, services and development, keeping most Dalits in severe poverty. They are often de facto excluded from decision making and meaningful participation in public and civil life.

Several observations:

  1. Is the alleged caste behavior crashing into some of the precepts of life in the US?
  2. Is Google's management reacting like a cow stunned by a slaughterhouse's captive bolt pistol?
  3. Should the bias allegations raised by Dr. Timnit Gebru be revisited in the context of management behaviors AND algorithmic functions focused on speed and efficiency for ad-related purposes? (Perhaps by academics without financial ties to Google, experts from the Netherlands, and maybe a couple of European Union lawyers? US regulators and Congressional representatives would be able to review the findings after the data are gathered.)
  4. In the alleged Google caste system, where do engineers from certain schools rank? What about females from “good” schools versus females from “less good” schools? What about other criteria designed to separate the herd into tidy buckets? None of this 60 percent threshold methodology. Let’s have nice tidy buckets, shall we? No Drs. Gebru and Mitchell gnawing at Dr. Jeff Dean’s snorkeling outfit.

I wonder what will be roasted in the Googley fire pit in celebration of Father's Day? Goat meat and makka rotis? Zero sacred cow burgers.

Stephen E Arnold, June 3, 2022

Make Sales, Bill Time: Is There More to Real Work?

May 20, 2022

A partial answer to this question can be found in “Many Software Companies Are a Joke.” I circled this in bright green (that’s the money paid for not-too-helpful outputs):

The sad thing is that you get used to being busy but not productive, and when I say busy I mean pretending to be working hard when being watched. In other words, you will master the art of “eye service”.

Can one detect signals about “busy but not productive” in sectors other than software development? Let’s give this a whirl.

  1. Microsoft Teams and its monitoring functions and the parallel development of software that spoofs such monitoring.
  2. Meetings (in person and virtual) about inconsequential details when core functions do not meet customer needs.
  3. Decisions to replace informed humans with chatbots so employees do not have to deal with customers who complain about incorrect orders, non-functioning components, or billing mistakes.

I do like the idea of perfecting "eye service." Perhaps this can be converted into a for-fee training program called "How to Look Busy While Doomscrolling"?

If one does no work, then one is not responsible for problems.

Stephen E Arnold, May 20, 2022

Differences Between Data Science And Business Intelligence

May 17, 2022

Data science is an encompassing term that is hard to define: an umbrella field that splinters in many directions. The Smart Data Collective explains the difference in "The Difference Between Business Intelligence And Real Data Science." According to the article, real data science means combining old and new data, analyzing it, and applying it to current business practices. Business intelligence (BI) focuses more on applications, such as creating charts, graphs, and reports.

Companies are interested in employing both real data science and business intelligence, but distinguishing the two can be confusing. Data scientists and BI analysts hold different jobs requiring specialized expertise. Data scientists are experts in predicting future outcomes by building various models and discovering correlations. BI analysts know how to generate dashboards of historical data based on a set of key performance metrics.

The data scientist's role is not based on guesswork. Data scientists are required to be experts in predictive and prescriptive analyses, and their outcomes need to be reasonably accurate for a business to succeed. BI needs advance planning to combine data sources into useful content; data science, meanwhile, can be done instantly.
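A rough sketch may make the contrast concrete. The data, column names, and model below are invented for illustration; they are not from the cited article. The BI half summarizes what already happened into report numbers, while the data science half fits a model to the same history and predicts a future outcome.

```python
# Illustrative contrast between BI-style reporting and data-science-style prediction.
# All data and names are made up for this sketch.
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.DataFrame({
    "quarter":  [1, 2, 3, 4, 5, 6, 7, 8],
    "ad_spend": [10, 12, 15, 11, 18, 20, 22, 25],
    "revenue":  [100, 115, 140, 108, 170, 185, 200, 230],
})

# Business intelligence: aggregate historical data into a KPI report or dashboard.
kpi_report = sales[["ad_spend", "revenue"]].agg(["sum", "mean"])
print("Historical KPI report:\n", kpi_report)

# Real data science: fit a model to the history and predict a future outcome.
model = LinearRegression().fit(sales[["ad_spend"]], sales["revenue"])
next_quarter = pd.DataFrame({"ad_spend": [30]})
print("Predicted revenue if ad_spend reaches 30:", round(model.predict(next_quarter)[0], 1))
```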

There are downsides to both:

“As you cannot get the data transformation done instantly with BI, it is a slow manual process involving plenty of pre-planning and comparisons. It needs to be repeated monthly, quarterly or annually and it is thus not reusable. Yet, the real data science process involves creating instant data transformations via predictive apps that trigger future predictions based on certain data combinations. This is clearly a fast process, involving a lot of experimentation.”

Business intelligence and real data science are handy for any business. Understanding the difference is key to utilizing them.

Whitney Grace, May 17, 2022

Google: Dark Patterns? Nope, Maybe Clumsy Patterns?

May 5, 2022

Ah, the Google. Each day more interesting information about its business processes brightens my day. I just read a post by vort3 called "Google's Most Ridiculous Trick to Force Users into Adding Phone Number." The post's list of "things that are wrong" caught my attention. Here are several of the items:

You can’t generate app specific passwords if you don’t have 2FA enabled. That’s some artificial limitation made to force you into adding phone number to your account.

You can’t use authenticator app to enable 2FA. I have no idea why SMS which is the least secure way to send information is a primary method and authenticator app which can be set up by scanning QR from the screen without sending any information at all is «secondary» and can only be used after you give your phone number.

Nowhere in announcements or help pages or in the Google Account interface they tell you that you can’t generate app passwords if you don’t have 2FA. The button is just missing and you wouldn’t even know it should be there unless you search on the internet.

Nowhere they tell you the only way to enable 2FA is to link your account to your phone number or to your android/iphone device, the options are just not there.
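The complaint about SMS versus an authenticator app is worth unpacking. A time-based one-time password (TOTP) authenticator only needs a shared secret, usually delivered once as a QR code; after that, codes are computed locally from the secret and the clock, with no phone number and no message in transit. The sketch below uses the third-party pyotp library and made-up account names; it illustrates the general TOTP scheme, not Google's implementation.

```python
# Minimal sketch of how a QR-provisioned authenticator app (TOTP, RFC 6238) works.
# Uses the third-party pyotp library; account and issuer names are invented.
import pyotp

# 1. The service generates a random shared secret and displays it as a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("URI encoded in the QR code:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

# 2. The authenticator app scans the QR code once. From then on it computes a
#    six-digit code locally from the secret and the current time; nothing is
#    sent over SMS and no phone number is involved.
code = totp.now()
print("Current one-time code:", code)

# 3. The service verifies the submitted code against the same shared secret.
print("Code accepted:", totp.verify(code))
```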

Vort3 appears to be not too Googley. Others chimed in on Vort3's post. Some of the comments are quite negative; for example, JQPABC123 said:

The fastest way to convince me *not* to use a product is to attach a “Google” label to it. Nothing Google has to offer justifies the drawbacks.

Definitely a professional who might struggle in a Google business process interview. By this I mean that asking "What process?" is a downer.

The fix, according to CraftyGuy is, “Stop… using Google.”

The Beyond Search team thinks the Google is the cat's pajamas because these are not Dark Patterns; they just seem clumsy.

Stephen E Arnold, May 5, 2022

NCC April McKinsey: More Controversy

April 27, 2022

The real news outfit AP (once Associated Press) published "Macron holds 1st big rally; Rivals stir up 'McKinsey Affair'." [If this link 404s, please contact your local AP professional, not me.] The main point of the news story is that the entity name "McKinsey" is not the blue chip money machine. Nope. McKinsey, in the French context of Covid and re-election, means allegations about the use of American consultants. What adds some zip to the blue chip company's name is its association by the French senate with allegedly improper tax practices. The venerable and litigious AP uses the word "dodging" in the article. Another point is that fees paid to consulting firms have risen. Now this is not news to anyone with some familiarity with the practices of blue chip consulting companies. For me, the key sentence in the AP's article is this:

…the [French senate] report says McKinsey hasn’t paid corporate profit taxes in France since at least 2011, but instead used a system of ‘tax optimization’ through its Delaware-based parent company.

That’s nifty. More than a decade. Impressive enforcement by the French tax authority. I suppose the good news is that the tax optimization method did not make use of banking facilities in the Cayman Islands. Perhaps McKinsey needs to hire lawyers and its own business advisors. First the opioid misstep in the US and now the French government.

Impressive.

Stephen E Arnold, April 27, 2022

Google: Struggles with Curation

April 21, 2022

Should Google outsource Play store content curation to Amazon’s Mechanical Turk or Fiverr?

Sadly, one cannot assume that because an app is available through Google Play it is safe. Engadget reports, “Google Pulls Apps that May Have Harvested Data from Millions of Android Devices.” Writer S. Dent reveals:

“Google has pulled dozens of apps used by millions of users after finding that they covertly harvested data, The Wall Street Journal has reported. Researchers found weather apps, highway radar apps, QR scanners, prayer apps and others containing code that could harvest a user’s precise location, email, phone numbers and more. It was made by Measurement Systems, a company that’s reportedly linked to a Virginia defense contractor that does cyber-intelligence and more for US national-security agencies. It has denied the allegations.”

Naturally. We find it interesting that, according to the report, the firm was after data mainly from the Middle East, Central and Eastern Europe and Asia. The write-up continues:

“The code was discovered by researchers Serge Egelman from UC Berkeley and the University of Calgary’s Joel Reardon, who disclosed their findings to federal regulators and Google. It can ‘without a doubt be described as malware,’ Egelman told the WSJ. Measurement Systems reportedly paid developers to add their software development kits (SDKs) to apps. The developers would not only be paid, but receive detailed information about their user base. The SDK was present on apps downloaded to at least 60 million mobile devices. One app developer said it was told that the code was collecting data on behalf of ISPs along with financial service and energy companies.”

So how did these apps slip through the vetting process? Maybe the app review methods are flawed, not applied rigorously, or not applied consistently. Or perhaps they are simply a bit of PR hogwash? We don't know, but the question is intriguing. Google has removed the apps from the Play store, but of course they still lurk on millions of devices. In its email to the Wall Street Journal, Measurement Systems not only insists its apps are innocent, but it also asserts it is "not aware" of any connection between it and US defense contractors.

And what about the quantumly supreme Google smart software?

Cynthia Murrell, April 21, 2022

Googley Fact-Checking Efforts

April 14, 2022

Perhaps feeling the pressure to do something about the spread of falsehoods online, “Google Rolls Out Fact-Checking Features to Help Spot Misinformation” on developing news stories, reports Silicon Republic. The company’s product manager Nidhi Hebbar highlighted several of these features in a recent blog post. One is the search platform’s new resource page that offers suggestions for evaluating information. Then there is a new label within Google Search that identifies stories frequently cited by real news outfits. We also learn about the company’s Fact Check Explorer, which answers user queries on various topics with fact checks from “reputable publishers.” We are told Google is also going out of its way to support fact-checkers. Writer Leigh McGowran explains:

“Google has also partnered with a number of fact-checking organisations globally to bolster efforts to deal with misinformation. This includes a collaboration with the International Fact Checking Network (IFCN) at the non-profit Poynter Institute. This partnership is designed to provide training and resources to fact checkers and industry experts around the world, and Google said the IFCN will create a new programme to help collaboration, support fact checkers against harassment and host training workshops. Google is also working with the collaborative network LatamChequea to train 500 new fact checkers in Argentina, Colombia, Mexico and Peru.”
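As an aside for hands-on readers: the Fact Check Explorer mentioned above has a programmatic counterpart, Google's Fact Check Tools API, which returns published fact checks for a query. The sketch below is a minimal, unofficial example; YOUR_API_KEY is a placeholder, and the response fields it prints reflect my reading of the public documentation, so treat the whole thing as illustrative rather than definitive.

```python
# Illustrative query against Google's public Fact Check Tools API.
# YOUR_API_KEY is a placeholder; response field names follow the public docs
# as I understand them and may differ in practice.
import requests

API_KEY = "YOUR_API_KEY"  # obtained from the Google Cloud console
url = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
params = {"query": "climate change", "languageCode": "en", "key": API_KEY}

response = requests.get(url, params=params, timeout=30)
response.raise_for_status()

for claim in response.json().get("claims", []):
    text = claim.get("text", "")
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        rating = review.get("textualRating", "n/a")
        print(f"{publisher}: '{rating}' -> {text}")
```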

The problem of misinformation online has only grown since it became a hot topic in the mid-2010s. The write-up continues:

“Events such as the Covid-19 pandemic and the US Capitol riots in January 2021 flung online misinformation into the sphere of public debate, with many online platforms taking action on misleading or inaccurate info, whether posted deliberately or otherwise. Misinformation has come to the fore again with the Russian invasion of Ukraine, as people have reported seeing misleading, manipulated or false information about the conflict on social media platforms such as Facebook, Twitter, TikTok and Telegram.”

Will Google’s resources help stem the tide?

Cynthia Murrell, April 14, 2022

AI Helps Out Lawyers

April 11, 2022

Artificial intelligence algorithms have negatively affected as many industries as they have assisted. One industry that has benefitted from AI is legal services, explains Medium in "How Artificial Intelligence Is Helping Solve The Needs Of Small Law Practitioners." In the past, small law firms were limited in the number of cases they could handle. AI algorithms now allow small law practices to compete with the larger firms in all areas of law. How is this possible?

“The latest revolution in legal research technology ‘puts a lawyer’s skill and expertise in the driver’s seat…’ New artificial intelligence tools give lawyers instant access to vast amounts of information and analysis online, but also the ability to turn that into actionable insights. They can be reminded to check specific precedents and the latest rulings, or be directed to examine where an argument might be incomplete. That leaves the lawyers themselves to do what only they can: think, reason, develop creative arguments and negotiation strategies, provide personal service, and respond to a client’s changing needs.”

Lawyers used to rely on printed reference materials from databases and professional publications. They were limited by the number of hours in a day, the people available, and access to the newest and best resources. That changed when computers entered the game and automated technology began delivering analytical insights. As technology has advanced, lawyers can cross-reference multiple resources and improve legal decision making.

While lawyers are benefitting from the new AI, those who do not keep up are quickly left behind. Lawyers must stay aware of current events, of how their digital tools change, and of how to keep advancing the algorithms so they can continue to practice. That is not much different from the past, except it is moving at a faster rate.

Whitney Grace, April 11, 2022

Why Be Like Clearview AI? Google Fabs Data the Way TSMC Makes Chips

April 8, 2022

Machine learning requires data. Lots of data. Datasets can set AI trainers back millions of dollars, and even that does not guarantee a collection free of problems like bias and privacy issues. Researchers at MIT have developed another way, at least when it comes to image identification. The World Economic Forum reports, "These AI Tools Are Teaching Themselves to Improve How they Classify Images." Of course, one must start somewhere, so a generative model is first trained on some actual data. From there, it generates synthetic data that, we're told, is almost indistinguishable from the real thing. Writer Adam Zewe cites the paper's lead author Ali Jahanian as he emphasizes:

“But generative models are even more useful because they learn how to transform the underlying data on which they are trained, he says. If the model is trained on images of cars, it can ‘imagine’ how a car would look in different situations — situations it did not see during training — and then output images that show the car in unique poses, colors, or sizes. Having multiple views of the same image is important for a technique called contrastive learning, where a machine-learning model is shown many unlabeled images to learn which pairs are similar or different. The researchers connected a pretrained generative model to a contrastive learning model in a way that allowed the two models to work together automatically. The contrastive learner could tell the generative model to produce different views of an object, and then learn to identify that object from multiple angles, Jahanian explains. ‘This was like connecting two building blocks. Because the generative model can give us different views of the same thing, it can help the contrastive method to learn better representations,’ he says.”

Ah, algorithmic teamwork. Another advantage of this method is the nearly infinite supply of samples the model can generate, since more samples (usually) make for a better-trained AI. Jahanian also notes that once a generative model has created a repository of synthetic data, that resource can be posted online for others to use. The team also hopes to use their technique to generate corner cases, which often cannot be learned from real data sets and are especially troublesome when it comes to potentially dangerous uses like self-driving cars. If this hope is realized, it could be a huge boon.
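For readers who want the mechanics, here is a toy sketch of the pairing the researchers describe: a stand-in for a pretrained generative model produces two views of the same underlying latent code, and a contrastive, InfoNCE-style loss trains an encoder to pull those views together. It is my illustration, not the MIT team's code; the architectures, sizes, and perturbation scheme are invented.

```python
# Toy sketch: a frozen generator supplies multiple "views" of the same object,
# and a contrastive (InfoNCE-style) loss trains an encoder on them.
# Not the MIT researchers' code; all architectures and sizes are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_dim, repr_dim, batch = 16, 64, 32, 8

# Stand-in for a pretrained generative model: latent code -> "image".
generator = nn.Sequential(nn.Linear(latent_dim, image_dim), nn.Tanh())
# The encoder being trained contrastively: "image" -> representation.
encoder = nn.Linear(image_dim, repr_dim)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(200):
    z = torch.randn(batch, latent_dim)
    # Two views of the same object: perturbed copies of one latent code stand in
    # for "the same car in different poses, colors, or sizes".
    with torch.no_grad():  # the generator stays frozen
        view_a = generator(z + 0.1 * torch.randn_like(z))
        view_b = generator(z + 0.1 * torch.randn_like(z))

    h_a = F.normalize(encoder(view_a), dim=1)
    h_b = F.normalize(encoder(view_b), dim=1)

    # InfoNCE: each sample's view_a should match its own view_b, not the others'.
    logits = h_a @ h_b.t() / 0.1          # cosine similarities over a temperature
    labels = torch.arange(batch)
    loss = F.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final contrastive loss:", round(loss.item(), 4))
```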

This all sounds great, but what if—just a minor if—the model is off base? And, once this tech moves out of the laboratory, how would we know? The researchers acknowledge a couple other limitations. For one, their generative models occasionally reveal source data, which negates the privacy advantage. Furthermore, any biases in the limited datasets used for the initial training will be amplified unless the model is “properly audited.” It seems like transparency, which somehow remains elusive in commercial AI applications, would be crucial. Perhaps the researchers have an idea how to solve that riddle.

Funding for the project was supplied, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

Cynthia Murrell, April 8, 2022
