Seattle: Awareness Flickering… Maybe?

January 17, 2023

Generation Z is the first generation raised entirely with social media. Its members are also growing up during a historic mental health crisis. Educators and medical professionals believe there is a link between the two. While studies are not conclusive, there is a correlation between rising mental health problems and social media use. The Seattle Times reports that Seattle's public schools think the same: “Seattle Schools Sues Social Media Firms Over Youth Mental Health Crisis.”

Seattle Public Schools has filed a ninety-page lawsuit asserting that social media companies purposely designed, marketed, and operated their platforms for optimum engagement with kids so they could earn profits. The lawsuit claims the companies cause mental and physical health problems, such as depression, eating disorders, and anxiety, and enable cyberbullying. Seattle Public Schools’ (SPS) lawsuit states the companies violated Washington’s public nuisance law and should be penalized.

SPS argues that, due to the increase in mental and physical health disorders, it has been forced to divert resources and spend funds on counselors, teacher training in mental health issues, and educating kids about the dangers of social media. SPS wants the tech companies to be held responsible and to help treat the crisis:

“ ‘Our students — and young people everywhere — face unprecedented learning and life struggles that are amplified by the negative impacts of increased screen time, unfiltered content, and potentially addictive properties of social media,’ said SPS Superintendent Brent Jones in the release. ‘We are confident and hopeful that this lawsuit is the first step toward reversing this trend for our students, children throughout Washington state, and the entire country.’”

Tech insiders have reported that social media companies are aware of the dangers their platforms pose to kids, but are not too concerned. The tech companies argue they have tools to help adults limit kids’ screen time. Who is usually savvier with tech though, kids or adults?

The rising mental health crisis is also caused by two additional factors:

  1. Social media induces mass hysteria in kids because it is literally a digital crowd. Humans are like sheep: they follow crowds.
  2. Mental health diagnoses are more accurate, because the science has improved. More kids are being diagnosed because the experts know more.

Social media is only part of the problem. Tech companies, however, should be held accountable because they are knowingly contributing to the problem. And Seattle? Flicker, flicker candle of awareness.

Whitney Grace, January 17, 2023

Tech Needs: Programmers, Plumbing, and Prayers

January 17, 2023

A recent survey by open-source technology firm WSO2 asked 200 IT managers in Ireland and the UK about their challenges and concerns. BetaNews shares some of the results in, “IT Infrastructure Challenges Echo a Rapidly Changing Digital Landscape.” We learn of issues both short- and long-term. WSO2’s Ricardo Diniz describes the top three:

“The biggest IT challenge affecting decision-makers is ‘legacy infrastructure’. Fifty-five percent of those surveyed said it is a top challenge right now, although only 39 percent expect it to be a top challenge in three years’ time. This indicates a degree of confidence that legacy issues can be overcome, either through tools that integrate better with the legacy platforms, or the rollout of alternatives enabling legacy tech to be retired. Second on the list is ‘managing security risks’, cited by half of the respondents as a current problem, though only 41 percent expect to see it as an issue in the future. This is not surprising; given the headline-grabbing breaches and third-party risks facing organizations, resilience and protection are priorities. ‘Skills shortages in the IT team’ complete the top three challenges. It is an issue for 48 percent and is still expected to be a problem in three years’ time according to 39 percent of respondents. Notably, these three challenges are set to remain top of the list – albeit at a slightly less troublesome level – in three years’ time.”

A couple other challenges, however, seem on track to remain just as irksome in three years. One is businesses’ transition to the cloud, currently in progress. Most respondents, concerned about integrations with legacy systems and maximizing ROI, hesitate to move all their operations to the cloud and prefer a hybrid approach. Diniz recommends cloud vendors remain flexible.

The other stubborn issue is API integration and management. Though APIs are fundamental to IT infrastructure, Diniz writes, IT leaders seem unsure how to wield them effectively. As a company quite familiar with APIs, WSO2 has published some advice on the matter. Founded in 2005, WSO2 is based in Silicon Valley and maintains offices around the world.

Cynthia Murrell, January 17, 2023

Google and Its PR Response to the ChatGPT Buzz Noise

January 16, 2023

A crazy wave is sweeping through the technology datasphere. ChatGPT, OpenAI, Microsoft, Silicon Valley pundits, and educators are shaken, not stirred, into the next big thing. But where is the Google in this cyclone bomb of smart software? The craze is not for a list of documents matching a user’s query. People like students and spammers are eager for tools that can write, talk, draw pictures, and code. Yes, code more good enough software, by golly.

In this torrential outpouring of commentary, free demonstrations, and venture capitalists’ excitement, I want to ask a simple question: Where’s the Google? Well, to the Google haters, the GOOG is in panic mode. RED ALERT, RED ALERT.

From my point of view, the Google has been busy busy being Google. Its head of search, Prabhakar Raghavan, is in the spotlight because some believe he has missed the Google bus taking him to the future of search. The idea that Googzilla has been napping before heading to Vegas to follow the NCAA basketball tournament is incorrect. Google has been busy, just not in a podcast, talking heads, pundit tweeting way.

Let’s look at two examples of what Google has been up to since ChatGPT became the next big thing in a rather dismal economic environment.

The first is the appearance of articles about the forward forward method for training smart software. You can read a reasonably good explanation in “What Is the ‘Forward-Forward’ Algorithm, Geoffrey Hinton’s New AI Technique?” The idea is that some of the old-school approaches won’t work in today’s go-go world. Google, of course, has solved this problem. Did the forward forward thing catch the attention of the luminaries excited about ChatGPT? No. Why? Google is not too good at marketing in my opinion. ChatGPT is destined to be a mere footnote. Yep, a footnote, probably one in multiple Google papers like Understanding Diffusion Models: A Unified Perspective (August 2022). (Trust me. There are quite a few of these papers with comments about the flaws of ChatGPT-type software in the “closings” or “conclusions” of these Google papers.)
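For readers who want more than marketing buzz, the gist of Hinton’s forward-forward idea can be sketched in a few lines. This toy example is my own illustration, not Google’s code: each layer is trained locally, with no backpropagation, to produce high “goodness” (the sum of squared activations) on positive data and low goodness on negative data. The threshold, learning rate, and toy data below are assumptions chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # "Goodness" of a layer's activations: sum of squares per sample.
    return (h ** 2).sum(axis=1)

class FFLayer:
    """One locally trained layer in the forward-forward style."""
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0):
        self.W = rng.normal(0.0, 0.1, (n_in, n_out))
        self.lr, self.theta = lr, theta

    def _normalize(self, x):
        # Length-normalize inputs so the previous layer's goodness
        # cannot leak into this layer's decision.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W)  # ReLU

    def train_step(self, x_pos, x_neg):
        # Two forward passes: push goodness above theta on positive
        # data and below theta on negative data (logistic objective),
        # updating only this layer's weights -- no backpropagation.
        for x, label in ((x_pos, 1.0), (x_neg, 0.0)):
            xn = self._normalize(x)
            h = np.maximum(0.0, xn @ self.W)
            p = 1.0 / (1.0 + np.exp(-(goodness(h) - self.theta)))
            grad_h = (label - p)[:, None] * 2.0 * h  # d(goodness)/dh = 2h
            self.W += self.lr * xn.T @ grad_h / len(x)

# Toy "positive" and "negative" data: clusters on opposite sides.
x_pos = rng.normal(+1.0, 0.3, (256, 8))
x_neg = rng.normal(-1.0, 0.3, (256, 8))

layer = FFLayer(8, 16)
for _ in range(200):
    layer.train_step(x_pos, x_neg)

g_pos = goodness(layer.forward(x_pos)).mean()
g_neg = goodness(layer.forward(x_neg)).mean()
print(g_pos > g_neg)  # the layer learns to separate the two without backprop
```

Whether this local, layer-by-layer recipe scales to Google-sized models is exactly the open question the papers debate; the sketch only shows the mechanism.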

The second is the presentation of information about Google’s higher purpose. A good example of this is the recent interview with a Googler involved in the mysterious game-playing, protein-folding outfit called DeepMind. “DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution” does a good job of hitting the themes expressed in technical papers, YouTube video interviews, and breezy presentations at smart software conferences. This is a follow-on to Google’s talking with MIT researcher Lex Fridman about the Google engineer who thought the DeepMind system was a person and a two-hour chat with the boss of DeepMind. The CEO video is at this link.

I want to highlight three points from this interview/article.

[A] Let’s look at this passage from the Time Magazine interview with the CEO of DeepMind:

Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

I interpret this to mean that Google has better, faster, cheaper, and smarter NLP technology. Notice the idea of putting competitors in “mere footnotes.” This is an academic, semi-polite way to say, “Loser.”

[B] DeepMind allegedly became a unit of Alphabet Google for this reason:

Google was “very happy to accept” DeepMind’s ethical red lines “as part of the acquisition.”

Forget the money. Think “ethical red lines.” Okay, that’s an interesting concept for a company which is in the data hoovering business, sells advertising, has a bureaucratic approach I heard described as slime mold, and is being sued for assorted allegations of monopolistic behavior in several countries.

[C] The Time Magazine article includes this statement:

Back at DeepMind’s spiral staircase, an employee explains that the DNA sculpture is designed to rotate, but today the motor is broken. Closer inspection shows some of the rungs of the helix are askew.

Interesting choice of words: “The motor is broken” and “askew.” Is this irony or just the way it is when engineering has to be good enough and advertising powers the buzzing nervous system of the company?

From my point of view, Google has been responding to ChatGPT with academic reminders that the online advertising outfit has a better mousetrap. My thought is that Google knew ChatGPT would be a big deal. That realization sparked the attempt by Google to answer questions with cards and weird little factoids related to the user’s query. The real beef or “wood behind” the program is the catchy forward forward campaign. How is that working out? I don’t have a Google T shirt that spells out Forward Forward. Have you seen one? My research suggests that Google wants to corner the market on low cost training data. Think Snorkel. Google pushes synthetic data because it is not real and, therefore, cannot be dragged into court over improper use of Web-accessible content. Google, I believe, wants to become the Trader Joe’s of off-the-shelf training data and ready-to-run smart software models. The idea has been implemented to some degree at Amazon’s AWS as I recall.

Furthermore, Google’s idea of a PR blitz is talking with MIT researcher Lex Fridman. Mr. Fridman interviewed the Google engineer (now a Xoogler) who thought the DeepMind system was a person and sort of alive. Mr. Fridman also spoke with the boss of DeepMind about smart software. (The video is at this link.) The themes are familiar: great software, more behind the curtains, and doing good with Go and proteins.

Google faces several challenges with its PR effort to respond to ChatGPT:

  1. I am of the opinion that most people, even those involved in smart software, are not aware that Google has been running a PR and marketing campaign to make clear the superiority of its system and method. No mere footnote for the Google. We do proteins. We snorkel. We forward forward. The problem is that ChatGPT is everywhere, and people like high school students are talking about it. Even artists are aware of smart software and instant image generation, OpenAI style.
  2. Google remains ill-equipped to respond to ChatGPT’s sudden thunder showers and wind storms of social buzz. Not even Google’s rise to fame matches what has happened to OpenAI and ChatGPT in the last few months. There are rumors that Microsoft will do more than provide Azure computing resources for ChatGPT. Microsoft may dump hard cash billions into OpenAI. Who is not excited to punch a button and have Microsoft Word write that report for you? I think high school students will embrace the idea; teachers and article writers at CNet, not so much.
  3. Retooling Google’s decades-old systems and methods for the snappy ChatGPT approach will take time and money. Google has the money, but in the world of bomb cyclones the company may not have time. Technology fortunes can vaporize quickly, like the value of a used Tesla on Cars and Bids.

Net net: Google, believe it or not, has been in its own Googley way trying to respond to its ChatGPT moment. What the company has been doing is interesting. However, unlike some of Google’s technical processes, the online information access world is able to change. Can Google? Will high school students and search engine optimization spam writers care? What about “axis of evil” outfits and their propaganda agencies? What about users who do not know when a machine-generated output is dead wrong? Google may not face an existential crisis, but the company definitely knows something is shaking the once-solid foundations of the buildings on Shoreline Drive.

Stephen E Arnold, January 16, 2023

Ah, Google Logic: The Internet Will Be Ruined If You Regulate Us!

January 16, 2023

I have to give Google credit for crazy logic and chutzpah if the information in “Google to SCOTUS: Liability for Promoting Terrorist Videos Will Ruin the Internet” is on the money. The write up presents as Truth this statement:

Google claimed that denying that Section 230 protections apply to YouTube’s recommendation engine would remove shields protecting all websites using algorithms to sort and surface relevant content—from search engines to online shopping websites. This, Google warned, would trigger “devastating spillover effects” that would devolve the Internet “into a disorganized mess and a litigation minefield”—which is exactly what Section 230 was designed to prevent. It seems that in Google’s view, a ruling against Google would transform the Internet into a dystopia where all websites and even individual users could potentially be sued for sharing links to content deemed offensive. In a statement, Google general counsel Halimah DeLaine Prado said that such liability would lead some bigger websites to overly censor content out of extreme caution, while websites with fewer resources would probably go the other direction and censor nothing.

I think this means the really super duper, magical Internet will be rendered even worse than some people think it is.

I must admit that Google has the money to hire people who will explain a potential revenue hit in catastrophic, life changing, universe disrupting lingo.

Let’s step back. Section 230 was a license to cut down the redwoods of publishing and cover the earth with synthetic grass. The effectiveness of the online ad model generated lots of dough, provided oodles of mouse pads and T shirts to certain people, and provided an easy way to destroy precision and recall in search.

Yep, a synthetic world. Why would Google see any type of legal or regulatory change as really bad … for Google? Interested in some potentially interesting content? Check out YouTube videos retrieved by entering the word “Nasheed.” Want some shortcuts to commercial software? Run a query on YouTube for “sony vegas 19 crack.” Curious about the content that entertains some adults with weird tastes? Navigate to YouTube and run a query for “grade school swim parties.”

Alternatively, one can navigate to Google.com and enter these queries for fun and excitement:

  • ammonium and urea nitrate combustion
  • afghan taliban recruitment requirements
  • principal components of methamphetamine

Other interesting queries are supported by Google. Why? Because the company abandoned the crazy idea of an editorial policy and published guidelines for acceptable content. A lack of informed regulation makes it easy for Google to do whatever it wants.

Now that sense of entitlement and the tech wizard myth is fading. Google has a reason to be frightened. Right now the company is thrashing internally in Code Red mode, banking on the fact that OpenAI will not match Google’s progress in synthetic data, and sticking its talons into the dike in order to control leaks.

What are these leaks? How about cost control, personnel issues, the European Union and its regulators, online advertising competitors, and the perception that Google Search is maybe less interesting than the ChatGPT thing, which one of the super analysts explained this way in superlatives and over-the-top lingo:

There is so much more to write about AI’s potential impact, but this Article is already plenty long. OpenAI is obviously the most interesting from a new company perspective: it is possible that OpenAI becomes the platform on which all other AI companies are built, which would ultimately mean the economic value of AI outside of OpenAI may be fairly modest; this is also the bull case for Google, as they [sic] would be the most well-palace to be the Microsoft to OpenAI’s AWS.

I have put in bold face the superlatives and categorical words and phrases used by the author of “AI and the Big Five.”

Now let’s step in more closely. Google’s appeal is an indication that Google is getting just a tad desperate. Sure it has billions. It is a giant business. But it is a company based on ad technology which is believed to have been inspired by Yahoo, Overture, GoTo ideas. I seem to recall that prior to the IPO a legal matter was resolved with that wildly erratic Yahoo crowd.

We are in the here and now. My hunch is that Google’s legal teams (note the plural) will be busy in 2023. It is not clear how much the company will have to pay and change to deal with a world in which Googley is not as exciting as the cheerleaders who want so much for a new world order of smart software.

What was my point about synthetic data? Stay tuned.

Stephen E Arnold, January 16, 2023

Smart Software KN Handle This Query for Kia

January 16, 2023

I read “People Can’t Read New Kia Logo, Resulting in 30,000 Monthly Searches for ‘KN Car’.” The issue seems to be a new logo which, when viewed by me, seems to read as the letter K and the letter N. Since I am a dinobaby, I assumed that I was at fault. But, no. The write up states:

All told, just 56% of the 1,062 survey participants nailed it, while 44% could not correctly identify the letters. Furthermore, 26% of respondents guessed it says “KN”—which results in roughly 30,000 online searches for “KN car” a month, according to Rerev.

I think this means that even the sharp eyed devils (my classification phrase for those in the GenX, GenY, and Millennial cohorts) cannot figure out the logo either.

I conjured up some of the marketing speak used to sell this new design to the Kia deciders:

  • “Daddy, I know you hired me, and I like my new logo. You must make it happen. Okay, daddy.” — A person hired via nepotism
  • “The dynamic lines make a bold statement about the thrust of the entire Kia line.” — From a bright eyed college graduate with a degree in business who is walking through the design’s advantages
  • “Okay. Modern?” — The statement by Song Ho-sung after listening to everyone in the logo meeting.

To me, the fix is simple: just change the name Kia to KN. BYD Auto may be a bigger problem than a KN logo.

Stephen E Arnold, January 16, 2023

Billable Hours: The Practice of Time Fantasy

January 16, 2023

I am not sure how I ended up at a nuclear company in Washington, DC in the 1970s. I was stumbling along in a PhD program, fiddling around indexing poems for professors, and writing essays no one other than some PhD teaching the class would ever read. (Hey, come to think about it that’s the position I am in today. I write essays, and no one reads them. Progress? I hope not. I love mediocrity, and I am living in the Golden Age of meh and good enough.)

I recall arriving and learning from the VP of Human Resources that I had to keep track of my time. Hello? I worked on my own schedule, and I never paid attention to time. Wait. I did. I knew when the university’s computer center would be least populated by people struggling with IBM punch cards and green bar paper.

Now I had to record, according to Nancy Apple (I think that was her name): [a] the project number, [b] the task code, and [c] the number of minutes I worked on that project’s task. I pointed out that I would be shuttling around from government office to government office and then back to the Rockville administrative center and laboratory.

She explained that travel time had a code.  I would have a project number, a task code for sitting in traffic on the Beltway, and a watch. Fill in the blanks.

As you might imagine, part of the learning curve for me was keeping track of time. I sort of did this, but as I became more and more engaged in the work about which I cannot speak, I filled in the time sheets every week. Okay, okay. I would fill in the time sheets when someone in Accounting called me and said, “I need your time sheets. We have to bill the client tomorrow. I want the time sheets now.”

As I muddled through my professional career, I understood how people worked and created time fantasy sheets. The idea was to hit the billable hour target without getting an auditor to camp out in my office. I thought of my learnings when I read “A Woman Who Claimed She Was Wrongly Dismissed Was Ordered to Repay Her Former Employer about $2,000 for Misrepresenting Her Working Hours.”

The write up which may or may not be written by a human states:

Besse [the time fantasy enthusiast] met with her former employer on March 29 last year. In a video recording of the meeting shared with the tribunal, she said: “Clearly, I’ve plugged time to files that I didn’t touch and that wasn’t right or appropriate in any way or fashion, and I recognize that and so for that I’m really sorry.” Judge Megan Stewart concluded that TimeCamp [the employee monitoring software watching the time fantasist] “likely accurately recorded” Besse’s work activities. She ordered Besse to compensate her former employer for a 50-hour discrepancy between her timesheets and TimeCamp’s records. In total, Besse was ordered to pay Reach a total of C$2,603 ($1,949) to compensate for wages and other payments, as well as C$153 ($115) in costs.

But the key passage for me was this one:

In her judgment, Stewart wrote: “Given that trust and honesty are essential to an employment relationship, particularly in a remote-work environment where direct supervision is absent, I find Miss Besse’s misconduct led to an irreparable breakdown in her employment relationship with Reach and that dismissal was proportionate in the circumstances.”

Far be it from me to raise questions, but I do have one: “Do lawyers engage in time fantasy billing?”

Of course not, “trust and honesty are essential.”

That’s good to know. Now what about PR and SEO billings? What about consulting firm billings?

If the claw back angle worked for this employer-employee set up, 2023 will be thrilling for lawyers, who obviously will not engage in time fantasy billing. Trust and honesty, right?

Stephen E Arnold, January 16, 2023

Reproducibility: Academics and Smart Software Share a Quirk

January 15, 2023

I can understand why a human fakes data in a journal article or a grant document. Tenure and government money perhaps. I think I understand why smart software exhibits this same flaw. Humans (intentionally or inadvertently) put their thumbs on the buttons that set thresholds and computational sequences.

The key point is, “Which flaw producer is cheaper and faster: human or code?” My hunch is that smart software wins because in the long run it cannot sue for discrimination, take vacations, or play table tennis at work. The downstream consequence may be that some humans get sicker or die. Let’s ask a hypothetical smart software engineer this question, “Do you care if your model and system causes harm?” I theorize that at least one of the software engineer wizards I know would say, “Not my problem.” The other would say, “Call 1-8-0-0-Y-O-U-W-I-S-H and file a complaint.”

Wowza.

“The Reproducibility Issues That Haunt Health-Care AI” states:

A data scientist at Harvard Medical School in Boston, Massachusetts, acquired the ten best-performing algorithms and challenged them on a subset of the data used in the original competition. On these data, the algorithms topped out at 60–70% accuracy, Yu says. In some cases, they were effectively coin tosses. “Almost all of these award-winning models failed miserably,” he [Kun-Hsing Yu, Harvard] says. “That was kind of surprising to us.”

Wowza wowza.

Will smart software get better? Sure. More data. More better. Think of the start ups. Think of the upsides. Think positively.

I want to point out that smart software may raise an interesting issue: Are flaws inherent because of the humans who created the models and selected the data? Or, are the flaws inherent in the algorithmic procedures buried deep in the smart software?

A palpable desire exists to find and implement a technology that creates jobs, rejuices some venture activities, and supports the questionable idea that technology solves problems without creating new ones.

What’s the quirk humans and smart software share? Being wrong.

Stephen E Arnold, January 15, 2023

The Intelware Sector: In the News Again

January 13, 2023

It’s Friday the 13th. Bad luck day for Voyager Labs, an Israel-based intelware vendor. But maybe there is bad luck for Facebook or Meta or whatever the company calls itself. Will there be more bad luck for outfits chasing specialized software and services firms?

Maybe.

The number of people interested in the savvy software and systems which comprise Israel’s intelware industry is small. In fact, even among some of the law enforcement and intelligence professionals whom I have encountered over the years, awareness of the number of firms, their professional and social linkages, and the capabilities of these systems is modest. NSO Group became the poster company for how some of these systems can be used. Not long ago, the Brennan Center made available some documents obtained via legal means about a company called Voyager Labs.

Now the Guardian newspaper (now begging for dollars with blue and white pleas) has published “Meta Alleges Surveillance Firm Collected Data on 600,000 Users via Fake Accounts.” The main idea of the write up is that an intelware vendor created sock puppet accounts with phony names. Under these fake identities, the investigators gathered information. The write up refers to “fake accounts” and says:

The lawsuit in federal court in California details activities that Meta says it uncovered in July 2022, alleging that Voyager used surveillance software that relied on fake accounts to scrape data from Facebook and Instagram, as well as Twitter, YouTube, LinkedIn and Telegram. Voyager created and operated more than 38,000 fake Facebook accounts to collect information from more than 600,000 Facebook users, including posts, likes, friends lists, photos, comments and information from groups and pages, according to the complaint. The affected users included employees of non-profits, universities, media organizations, healthcare facilities, the US armed forces and local, state and federal government agencies, along with full-time parents, retirees and union members, Meta said in its filing.

Let’s think about this fake account thing. How difficult is it to create a fake account on a Facebook property? About eight years ago, as a test, my team created a fake account for a dog. Not once in those eight years was any attempt made to verify the humanness or the dogness of the animal. The researcher (a special librarian, in fact) set up the account and demonstrated to others on my research team how the Facebook sign up system worked, or did not work, in this particular example. Once logged in, faithful and trusting Facebook seemed to keep our super user logged into the test computer. For all I know, Tess is still logged in, with Facebook doggedly tracking her every move. Here’s Tess:

[Image: Tess, the dog registered as a Facebook user]

Tough to see that Tess is not a true Facebook type, isn’t it?

Is the accusation directed at Voyager Labs a big deal? From my point of view, no. The reason that intelware companies use Facebook is that Facebook makes it easy to create a fake account, exercises minimal administrative review of registered users, and prioritizes other activities.

I personally don’t know what Voyager Labs did or did not do. I don’t care. I do know that other firms providing intelware have the capability of setting up, managing, and automating some actions of accounts for either a real human, an investigative team, or another software component or system. (Sorry, I am not at liberty to name these outfits.)

Grab your Tums bottle and consider these points:

  1. What other companies in Israel offer similar alleged capabilities?
  2. Where and when were these alleged capabilities developed?
  3. What entities funded start ups to implement alleged capabilities?
  4. What other companies offer software and services which deliver similar alleged capabilities?
  5. When did Facebook discover that its own sign up systems had become a go-to source of social action for these intelware systems?
  6. Why did Facebook ignore its sign up procedures failings?
  7. Are other countries developing and investing in similar systems with these alleged capabilities? If so, name a company in England, France, China, Germany, or the US?

These one-shot “intelware is bad” stories chop indiscriminately. The vendors get slashed. The social media companies look silly for having little interest in “real” identification of registrants. The licensees of intelware look bad because somehow investigations are somehow “wrong.” I think the media reporting on intelware look silly because the depth of the information on which they craft stories strikes me as shallow.

I am pointing out that a bit more diligence is required to understand the who, what, why, when, and where of specialized software and services. Let’s do some heavy lifting, folks.

Stephen E Arnold, January 13, 2023

Becoming Sort of Invisible

January 13, 2023

When it comes to spying on one’s citizens, China is second to none. But at least some surveillance tech can be thwarted with enough time, effort, and creativity, we learn from Vice in, “Chinese Students Invent Coat that Makes People Invisible to AI Security Cameras.” Reporter Koh Ewe describes China’s current surveillance situation:

“China boasts a notorious state-of-the-art state surveillance system that is known to infringe on the privacy of its citizens and target the regime’s political opponents. In 2019, the country was home to eight of the ten most surveilled cities in the world. Today, AI identification technologies are used by the government and companies alike, from identifying ‘suspicious’ Muslims in Xinjiang to discouraging children from late-night gaming.”

Yet four graduate students at China’s Wuhan University found a way to slip past one type of surveillance with their InvisDefense coat. Resembling an ordinary camouflage jacket, the garment includes thermal devices that emit different temperatures to skew cameras’ infrared thermal imaging. In tests using campus security cameras, the team reduced the AI’s accuracy by 57%. That number could have been higher if they did not also have to keep the coat from looking suspicious to human eyes. Nevertheless, it was enough to capture first prize at the Huawei Cup cybersecurity contest.

But wait, if the students were working to subvert state security, why compete in a high-profile competition? The team asserts it was actually working to help its beneficent rulers by identifying a weakness so it could be addressed. According to researcher Wei Hui, who designed the core algorithm:

“The fact that security cameras cannot detect the InvisDefense coat means that they are flawed. We are also working on this project to stimulate the development of existing machine vision technology, because we’re basically finding loopholes.”

And yet, Wei also stated,

“Security cameras using AI technology are everywhere. They pervade our lives. Our privacy is exposed under machine vision. We designed this product to counter malicious detection, to protect people’s privacy and safety in certain circumstances.”

Hmm. We learn the coat will be for sale to the tune of ¥500 (about $71). We are sure a list of those who purchase such a garment will be helpful, particularly to the Chinese government.

Cynthia Murrell, January 13, 2023

Social Media: Great for Surveillance, Not So Great for Democracy

January 13, 2023

Duh? Friday the 13th.

The respected polling organization Pew Research Center studied the views citizens of democratic nations hold about social media and its impact. According to the recent study, “Social Media Seen As Mostly Good For Democracy Across Many Nations, Except US Is A Major Outlier,” the US does not like social media meddling with its politics.

The majority of the polled countries believed social media affects democracy both positively and negatively. The US is a large outlier, however: only 34% of its citizens viewed social media as beneficial, while a whopping 64% believed the opposite. The US is not the only country that considers social media a cause of political division:

“Even in countries where assessments of social media’s impact are largely positive, most believe it has had some pernicious effects – in particular, it has led to manipulation and division within societies. A median of 84% across the 19 countries surveyed believe access to the internet and social media have made people easier to manipulate with false information and rumors. A recent analysis of the same survey shows that a median of 70% across the 19 nations consider the spread of false information online to be a major threat, second only to climate change on a list of global threats.

Additionally, a median of 65% think it has made people more divided in their political opinions. More than four-in-ten say it has made people less civil in how they talk about politics (only about a quarter say it has made people more civil).”

Despite the US being an outlier, other nations saw social media as a tool to stay informed about the news, raise public awareness, and allow citizens to express themselves.

The majority of Americans who viewed social media negatively were affiliated with the Republican Party or were Independents. Democrats and their leaners were less likely to think social media is a bad influence. Younger people also believe social media is more beneficial than older generations do.

Social media is another tool created by humans that can be good and bad. Metaphorically it is like a gun.

Whitney Grace, January 13, 2023
