A Surprise: Newton Minow Was Prescient

August 30, 2022

Social media gets the blame for most misinformation spreading across the Internet, but despite declining viewership, TV still plays a huge part in the polarization of the American populace. Ars Technica explains why in “It’s Not Just Social Media: Cable News Has Bigger Effect on Polarization.” While social media echo chambers exist, they do not operate at the huge scale we have been led to believe.

Researchers from Microsoft Research, Stanford University, and the University of Pennsylvania tracked the TV consumption of thousands of American adults from 2016 to 2019. They discovered that selective news exposure did increase polarization, but it mostly came from TV. They found that 17% of American TV news watchers are politically polarized, split roughly evenly between left and right. That is three to four times higher than the rate among online news consumers.

TV watchers also do not change their viewing habits:

“Besides being more politically siloed on average, our research found that TV news consumers are much more likely than web consumers to maintain the same partisan news diets over time: after six months, left-leaning TV audiences are 10 times more likely to remain segregated than left-leaning online audiences, and right-leaning audiences are 4.5 times more likely than their online counterparts. While these figures may seem intimidating, it is important to keep in mind that even among TV viewers, about 70 percent of right-leaning viewers and about 80 percent of left-leaning viewers do switch their news diets within six months. To the extent that long-lasting echo chambers do exist, then, they include only about 4 percent of the population.”

Whatever their political leanings, TV viewers never stray far from their preferred news networks. The partisan imbalance in how audiences get their news is also increasing, because more are shifting from broadcast news to cable.

This is not good, because it increases divisions among people rather than showing the commonalities everyone shares. It also makes news more sensational than it needs to be.

Whitney Grace, August 30, 2022

EU: Ahead of the US But Maybe Too Late Again

August 30, 2022

When making up for decades of inaction, just create more bureaucracy. That seems to be the approach behind the move revealed in Reuters‘ brief article, “EU Mulls New Unit with Antitrust Veterans to Enforce Tech Rules—Sources.” The European Commission seems to think it might be difficult to force tech giants to comply with the recently passed Digital Markets Act (DMA). Now where would they get that idea? The write-up tells us:

“The landmark rules, agreed in March, will go into force next year. They will bar the companies from setting their own products as preferences, forcing app developers to use their payment systems, and leveraging users’ data to push competing services. The new directorate at the Commission’s powerful antitrust arm may be headed by Alberto Bacchiega, director of information, communication and media, in charge of antitrust and merger cases involving the tech, media and consumer electronics industries, one of the people [familiar with the matter] said. Bacchiega could also be assisted by Thomas Kramler, head of the unit dealing with antitrust cases in e-commerce and data economy, and currently spearheading investigations into Apple and Amazon, the person said. Both officials are already liasing with those at the Commission’s Directorate-General for Communications Networks, Content and Technology which will jointly enforce the DMA, a third person said.”

Conveniently, both Bacchiega and Kramler were away on vacation and could not be reached for comment. A spokesperson stated the Commission is shuffling employees, assigning about 80 staff members to enforce the DMA. We wonder whether that is enough to counter Big Tech’s corporate resources, even with a pair of seasoned antitrust veterans at the helm.

Cynthia Murrell, August 29, 2022

Google Management: If True, a New Term Gains Currency

August 29, 2022

Caste bias. That’s a bound phrase with which I was not familiar. I grew up in Illinois, and when I was a wee lad in Illinois by the river gently flowing, castes and biases were not on my radar. Flash forward 77 years, and the concept remains outside the lingo of some people who live in Harrod’s Creek, Kentucky.

Google Scrapped a Talk on Caste Bias Because Some Employees Felt It Was “Anti Hindu”, if accurate, provides another glimpse of the Google’s difficult situation with regard to different ethnicities, religions and cults, and other factors which humanoids manifest.

The issue of management is a tricky one. As I pointed out in The Google Legacy (Infonortics Ltd, 2004), Google is a company with non-traditional management methods. These embraced settling an intellectual property misunderstanding with Yahoo related to advertising systems and methods, permitting a wide range of somewhat adolescent behaviors such as sleeping in bean bags and playing Foosball at work, and ignoring some of the more interesting behaviors super duper wizards demonstrate as part of their equipment for living.

The cited Quartz India article states:

“I cannot find the words to express just how traumatic and discriminatory Google’s actions were towards its employees and myself…” Soundararajan [the terminated speaker who is executive director of the US-based social justice organization Equality Labs] said in the press release.

The Google wizard charged with explaining the termination of the lecture allegedly said:

While noting that caste discrimination had “no place” at Google, Shannon Newberry, Google’s spokesperson, said in a statement to The Washington Post, “We also made the decision to not move forward with the proposed talk which—rather than bringing our community together and raising awareness—was creating division and rancor.”

Observations? I would like to offer three:

  1. Who is in charge at the Google? Does this individual harbor some biases? My experience suggests that it is very difficult for an individual to step outside of the self and judge in an objective manner what behaviors could trigger such remarkable management decisions, explanations, and reversals.
  2. The lingo used to explain the incident strikes me as classic Sillycon Valley: A statement designed not to address the core issue.
  3. I wonder how Dr. Timnit Gebru interprets the management decision making for the allegedly true Quartz described incident.

Yep, just part of the Google Legacy. “Caste bias” plus accompanying Google babble in my opinion.

Stephen E Arnold, August 29, 2022

Google: Errors Are Not Possible… Mostly

August 29, 2022

In my upcoming talk for a US government law enforcement meeting, I discuss some of the issues associated with wonky smart software. I spotted a fantastic example of how one quasi-alleged monopoly deals with tough questions about zippy technology.

As I understand “Google Refuses to Reinstate Man’s Account after He Took Medical Images of Son’s Groin,” an online ad company does not make errors… mostly. The article, which appeared in a UK newspaper, stated:

Google has refused to reinstate a man’s account after it wrongly flagged medical images he took of his son’s groin as child sexual abuse material…

The Alphabet Google YouTube DeepMind entity has sophisticated AI/ML (artificial intelligence/machine learning) systems which flag inappropriate content. Like most digital watchdogs, its zeros and ones are flawless… mostly, even though Google humans help out the excellent software. The article reports:

When the photos were automatically uploaded to the cloud, Google’s system identified them as CSAM. Two days later, Mark’s Gmail and other Google accounts, including Google Fi, which provides his phone service, were disabled over “harmful content” that was “a severe violation of the company’s policies and might be illegal”, the Times reported, citing a message on his phone. He later found out that Google had flagged another video he had on his phone and that the San Francisco police department opened an investigation into him. Mark was cleared of any criminal wrongdoing, but Google has said it will stand by its decision.

The cited article quotes a person from the American Civil Liberties Union, offering this observation:

“These systems can cause real problems for people.”

Several observations:

  1. Google is confident its smart software works; thus, Google is correct in its position on this misunderstanding.
  2. The real journalists and the father who tried to respond to a medical doctor to assist his son are not Googley; that is, given their response to the fabulous screening methods, they will not be able to get hired at the Alphabet Google YouTube construct as full time employees or contractors.
  3. The online ad company and would-be emulator of TikTok provides many helpful services. Those services allow the company to control information flows to help out everyone every single day.
  4. More color for this uplifting story can be found here.

Net net: Mother Google is correct… mostly. That’s why the Google timer is back online. Just click here. The company cares… mostly.

Stephen E Arnold, August 23, 2022

Copyright Trolls Await a Claim Paradise

August 29, 2022

Smart software can create content. In fact, the process can be automated, allowing a semi-useless humanoid to provide a few inputs and release a stream of synthetic content. (Remember, please, that these content outputs are weaponized to promote a specific idea, product, or belief.) Smart video tools will allow machines to create a video from a single image. If you are not familiar with this remarkable innovation in weaponized information, consider the import of Googley “transframing.” You can read about this contribution to society at this link.

I am not interested in exploring the technology of these systems. AI/ML (artificial intelligence and machine learning) stress my mental jargon filter. I want to focus on those unappreciated guardians of intellectual property: The entities and law firms enforcing assorted claims regarding images, text, and videos used without paying a royalty or getting legal permission to reuse an original creation.

The idea is simple: Smart software outputs a content object. The object is claimed by an organization eager to protect applicable copyright rules and regulations. The content object is marked with a © and maybe some paperwork will be filed. But why bother?

Now use some old fashioned hashing method to identify use of the content object, send a notice of © violation, demand payment, threaten legal action, and sit quietly like a “pigeon” in London for the cash to roll in.
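The claim-mill workflow sketched above can be illustrated with ordinary content hashing. Here is a minimal, hypothetical sketch in Python using an exact-match SHA-256 fingerprint (real enforcement outfits rely on perceptual hashes that survive edits, but the registry-and-check idea is the same; the class and names are my own invention):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Old-fashioned cryptographic hash: catches only byte-exact copies.
    return hashlib.sha256(content).hexdigest()

class ClaimRegistry:
    """Toy registry of 'claimed' content objects (hypothetical)."""

    def __init__(self):
        self._claims = {}

    def register(self, content: bytes, claimant: str):
        # Mark a machine-generated object as claimed by some entity.
        self._claims[fingerprint(content)] = claimant

    def check(self, content: bytes):
        # Return the claimant if the object matches a registered work, else None.
        return self._claims.get(fingerprint(content))

registry = ClaimRegistry()
registry.register(b"synthetic image bytes", "Troll LLC")

print(registry.check(b"synthetic image bytes"))  # exact copy -> flagged
print(registry.check(b"slightly edited bytes"))  # any change evades an exact hash
```

Note the limitation the last line demonstrates: a cryptographic hash misses any altered copy, which is why the trolls of the future would need fuzzy, perceptual matching to make the scheme pay.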

A few people have a partial understanding of what the AI/ML generated content objects will create. For a glimpse of these insights, navigate to HackerNews and this thread; for example:

The future will include humans claiming AI art as their own, possibly touched up a bit, and AIs claiming human art as their own.

The legal eagles are drooling. And the trolls? Quivering with excitement. Paradise ahead!

Stephen E Arnold, August 29, 2022

Ommmmm, the Future

August 29, 2022

Everyone wants to predict the future, but no one and nothing can do that with 100% accuracy. When it comes to the future of technology and its relationships with humans, tech journalist Om Malik shared his thoughts: “The Future of Tech As I See It.” Malik discussed four points on the future and technology.

In the first, Malik explained he tried to find the inherent value in all technology. He believes people focus too much time trying to figure out what will be the next big tech boom to make a buck. Focusing too much on the “next big thing” distracts from the current use and value of technology. In other words, Malik wants people to concentrate more on the present. He could also try using TikTok.

M1 computer chips will give users more powerful computers, equivalent to 25% of IBM’s Watson output. This will allow users to interact with computers in a manner different from anything we currently know. Malik states kids are being trained for a brand new world we can only conceptualize in the likes of the new Star Wars films, not old sci-fi classics like 2001: A Space Odyssey.

Malik makes a good point that authenticating your identity will be how companies like Google and Facebook make their revenue in the future:

“What’s one thing you’ve barely noticed about living in the mobile phone world? How often do you ‘Login with Facebook’ or ‘Login with Google’ because it’s more convenient than setting up an account? There is a lot of value in whichever company makes authentication easy in this world.

What if Apple offers a Metamask-like product as an authentication system and in exchange charges a small subscription fee? I would happily pay for the convenience alone. Authentication and payments can be critical to a post-app store world. Facebook, too, is hoping to ride the payments and authentication gravy train to the future.”

The bigger question is how will technology authenticate people? Blood samples? DNA?

Malik ends on the point that the United States no longer shapes the entire world when it comes to technology. India, China, Africa, and Russia are bigger players than most western nations realize, but that is not new information. People who aren’t ostriches are aware of this.

Whitney Grace, August 29, 2022

OpenText: Goodwill Search

August 26, 2022

I spotted a short item in the weird orange newspaper called “Micro Focus Shares Jump After Takeover Bid from Canadian Rival.” (This short news item resides behind a paywall. Can’t locate it? Yeah, that’s a problem for some folks.)

What? Micro Focus and Open Text are rivals? Interesting.

The key sentence is, in my opinion: “OpenText agreed to buy its UK rival in an all-cash deal that values the software developer at £5.1bn.”

Does Open Text have other search and retrieval properties? Yep.

Will Open Text become the big dog in enterprise search? Maybe. The persistent issue is the presence of Elasticsearch, which many developers of search based applications find to be better, faster, and cheaper than many commercial offerings. (“Is BRS search user friendly and cheaper?”, ask I. The answer from my viewshed is ho ho ho.)

I want to pay attention going forward to this acquisition. I am curious about the answers to these questions:

  • How will the math work out? It was a cash deal and there is the cost of sales and support to evaluate.
  • Will the Micro Focus customers become really happy campers? It is possible there are some issues with the Micro Focus software.
  • How will Open Text support what appear to be competing options; for example, many of Open Text’s software systems strike me as duplicative. Perhaps centralizing technical development and providing an upgraded customer service solution using the company’s own software will reduce costs.

Notice I did not once mention Autonomy, Recommind, Fulcrum, or Tuxedo. (Reuters mentioned that Micro Focus was haunted by Autonomy’s ghost. Not me. No, no, no.)

Stephen E Arnold, August 26, 2022

The Home of Dinobabies Knows How to Eliminate AI Bias

August 26, 2022

It is common knowledge in tech and the news media that AI training datasets are flawed. These datasets are unfortunately prone to teaching AI how to be “racist” and “sexist.” AI systems are computer programs, so they are not intentionally biased. The datasets that teach them how to work are flawed because they contain incorrect information about women and dark-skinned people. The solution is to build new datasets, but it is difficult to find troves of large, unpolluted information. MIT News explains a possible solution in the article “A Technique to Improve Both Fairness and Accuracy in Artificial Intelligence.”

Researchers already know that AI systems make mistakes, so they use selective regression to estimate a confidence level for each prediction. If the confidence is too low, the system rejects the prediction. Researchers from MIT and the MIT-IBM Watson AI Lab discovered what we already know: women and ethnic minorities are not accurately represented in the data even with selective regression. The MIT researchers designed two algorithms to fix the bias:

“One algorithm guarantees that the features the model uses to make predictions contain all information about the sensitive attributes in the dataset, such as race and sex, that is relevant to the target variable of interest. Sensitive attributes are features that may not be used for decisions, often due to laws or organizational policies. The second algorithm employs a calibration technique to ensure the model makes the same prediction for an input, regardless of whether any sensitive attributes are added to that input.”

The algorithms worked to reduce disparities in test cases.
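The selective-prediction idea behind the research can be sketched simply: the model abstains whenever its confidence falls below a threshold, and a fairness audit then checks how often each demographic group actually receives a prediction. A minimal, hypothetical Python sketch (the function names, threshold, and sample data are my own, not the MIT researchers’ code):

```python
def selective_predict(score: float, confidence: float, threshold: float = 0.8):
    # Selective regression: abstain (return None) when confidence is too low.
    return score if confidence >= threshold else None

def coverage_by_group(samples, threshold: float = 0.8):
    """Fraction of inputs per group that receive a prediction (not abstained)."""
    groups = {}
    for group, score, conf in samples:
        groups.setdefault(group, []).append(
            selective_predict(score, conf, threshold) is not None
        )
    return {g: sum(flags) / len(flags) for g, flags in groups.items()}

# (group, predicted score, model confidence) -- invented illustrative data
samples = [
    ("A", 0.9, 0.95), ("A", 0.7, 0.90), ("A", 0.6, 0.85),
    ("B", 0.8, 0.75), ("B", 0.5, 0.70), ("B", 0.4, 0.95),
]
print(coverage_by_group(samples))  # group B is served far less often than group A
```

The disparity the audit surfaces here (group A always gets a prediction, group B rarely does) is exactly the failure mode the MIT work addresses: abstention rates that differ sharply across sensitive groups even when overall accuracy looks fine.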

It is too bad that datasets are biased, because they do not paint an accurate representation of people, and researchers need to fix the disparities. It is even more unfortunate that clean datasets are hard to locate and that the Internet cannot be used because of all the junk created by trolls.

Whitney Grace, August 26, 2022

Data: A Disappointing Ride Down Zero Lane to Cell One

August 26, 2022

Projects meant to glean business insights through the analysis of vast troves of data still tend to disappoint. On its blog, British data-project management firm Brijj lists “5 Reasons Why 80% of Data and Insight Projects Fail.” The write-up tells us:

“In the UK alone, we spend £24bn on data projects every year. According to recent studies, however, organizational leadership has been dissatisfied with the value they get from data. In fact, they consider 80% of all data projects a failure. That equates to £19bn of waste. And why? Because so many don’t do the basics well. They never stood a chance.”

Not surprisingly, writer and Brijj founder/CEO Adrian Mitchell suggests consulting outside data experts from the start to make sure one’s project delivers those sweet, sweet insights:

“The bottom line is that both data creators and their business customers need to be involved in the data & insight project from the initial question through to the outcome and work closely together for it to provide actionable insights and urge action. Currently, there are many gaps between the two groups, resulting in disconnect, frustrations, time and financial losses, and no real-world outcomes. Organizations need to close these to truly harness the power of data and maximize its value.”

The list Mitchell offers looks awfully familiar; we think we have heard some of these “reasons” before. We are told the biggest problem is asking the wrong questions in the first place. Then there is, as mentioned above, a lack of collaboration between data analysts and their clients. If one has managed to gather useful bits of knowledge, they must be both communicated to the right people and made easy to find. Finally, standardized systems (like Brijj’s, we presume) should be put in place to make the whole process easier for the technically disinclined.

Perhaps Mitchell is right and these measures can help some companies make the most of the data they were persuaded to accumulate. It is worth keeping in mind, though, that any concepts derived by software have limitations… just like a blind date.

Cynthia Murrell, August 26, 2022

A Triller Thriller: Excitement I Do Not Need

August 26, 2022

Short-form video app Triller is eager to topple TikTok. When its rival was lambasted last summer for allowing white influencers to take credit for trends generated by Black content creators, Triller saw an opportunity. It immediately positioned itself as the platform that respects and elevates Black creators. It reached out to many of them with promises of regular monthly payments and coveted shares of stock while dangling visions of a content house, collaborations, and brand deals. However, whether from disorganization or disregard, The Washington Post reports, Triller is not holding up its end of the deal. In the article, “A TikTok Rival Promised Millions to Black Creators. Now Some Are Deep in Debt” (paywalled), reporter Taylor Lorenz writes:

“[Dancer David Warren] was part of a group of what Triller touted as 300 Black content creators offered contracts totaling $14 million — ‘the largest ever one-time commitment of capital to Black creators,’ the company bragged in a November news release. But nearly a year after Triller began recruiting Black talent, its payments to many creators have been erratic — and, in some cases, nonexistent, according to interviews with more than two dozen creators, talent managers and former company staff, many of whom spoke to The Washington Post on the condition of anonymity to avoid retaliation from the company. For influencers, it’s a disastrous turn from a platform with a reputation for paying big money, dubbed ‘Triller money,’ to get talent to post on the app. Far from ‘Triller money,’ the Black influencers were promised $4,000 per month, with half paid in equity, according to documents reviewed by The Post. Warren, used to making content for platforms controlled by other people, found the chance to own a piece of something thrilling. But now, as they cope with uncertain payments, many creators allege they are compelled to keep up with a demanding posting schedule and vague requirements that make it easy for the company to eliminate people from the program.”

Company executives flat-out deny allegations against them, but Lorenz shares her evidence in the article. She describes a toxic climate where administrators callously hold creators to the letter of their grueling agreements while failing to make good on tens of thousands of dollars in payments. In a spectacular display of gall, Triller informed creators it would prioritize keeping a certain amount on the books over its obligations to them as it prepares for its IPO. And those promised shares of stock that had creators feeling empowered? Nowhere to be seen. Whether it is a matter of contemptuous tokenization or mere incompetence, it seems Triller delivers little but a trail of broken promises.

Cynthia Murrell, August 26, 2022
