Quantum Computing: A Nasty Business

March 3, 2021

In a PhD program, successful candidates push the boundaries of knowledge and change the world for the better. Sometimes. One illustration of this happy outcome is the case of Zak Romaszko at the University of Sussex, who contributed to the school’s ion trap quantum computer project. Romaszko is now working at his professor’s spin-off company, Universal Quantum, on commercializing the tech to create large-scale quantum computers. Bravo!

Unfortunately, not all PhD programs are crucibles of such success stories. One in particular appears to be just the opposite, as described in “A Dishonest, Indifferent, and Toxic Culture” posted at the Huixiang Voice. The blog is dedicated to covering the heartbreaking experience of PhD candidate Huixiang Chen, who was studying at the University of Florida’s department of Electrical and Computer Engineering when he took his own life. The note Chen left behind indicated the reason, at least in part, was the pressure put on him by his advisor to go along with a fraudulent peer-review process.

We learn:

“It has been 20 months since the tragedy that a Ph.D. candidate from the University of Florida committed suicide, accusing his advisor coerce him into academic misconduct. Our latest article dropped a bump into the academic world by exposing the evidence of those academic misconduct. The Nature Index followed up with an in-depth report with comments from scientists and academic organizations worldwide expressing their shock and deep concerns about this scandal that happened at the University of Florida.”

A joint committee of the academic publisher Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) investigated the matter and found substance in the allegations. ACM has barred the offenders from participating in any ACM conference or publication for 15 years, the most severe penalty the organization has ever imposed. The post continues:

“The conclusion finally confirmed two important accusations listed in Huixiang Chen’s suicide note that:
1) The review process for his ISCA-2019 paper was broken, and most of the reviewers of the paper are ‘friends’ of his advisor Dr. Tao Li. The review process became organized and colluded academic fraud:
2) After recognizing that there are severe problems in his ISCA-2019 paper, Huixiang Chen was coerced by his advisor Dr. Tao Li to proceed with a submission despite that Huixiang Chen repeatedly expressed concerns about the correctness of the results reported in work, which led to a strong conscience condemnation and caused the suicide.
“Finally, the paper with academic misconduct got retracted by ACM as Huixiang’s last wish.”

Chen hoped the revelations he left behind would lead to a change in the world; perhaps they will. The problem, though, is much larger than the culture at one university. Peer-reviewed publications have become home to punitive behavior, non-reproducible results, and bureaucratic pressure. Perhaps it is time to find another way to review and share academic findings? Google’s AI ethics department may have some thoughts on academic scope and research reviews.

Cynthia Murrell, March 3, 2021

MIT Report about Deloitte Omits One Useful Item of Information

February 1, 2021

This is no big deal. Big government software project does not work. Yo, anyone remember DCGS, the Obama-era health site, the reinvigoration of the IRS systems, et al? Guess not. The outfit which accepted money from Mr. Epstein, and which is still explaining how a faculty member could possibly be ensnared in an international intellectual incident, is now putting Deloitte in its place.

Yeah, okay. A blue chip outfit takes a job and – surprise – the software does not work. Who is the bad actor? The group which wrote the statement of work, the COTR, the assorted government and Deloitte professionals trying to make government software super duper? Why not toss in the 18F, the Googler involved in government digitization, and the nifty oversight board for the CDC itself?

The write up “What Went Wrong with America’s $44 Million Vaccine Data System?” analyzes this all-too-common standard operating result from big technology projects. I noted:

So early in the pandemic, the CDC outlined the need for a system that could handle a mass vaccination campaign, once shots were approved. It wanted to streamline the whole thing: sign-ups, scheduling, inventory tracking, and immunization reporting. In May, it gave the task to consulting company Deloitte, a huge federal contractor, with a $16 million no-bid contract to manage “Covid-19 vaccine distribution and administration tracking.” In December, Deloitte snagged another $28 million for the project, again with no competition. The contract specifies that the award could go as high as $32 million, leaving taxpayers with a bill between $44 and $48 million. Why was Deloitte awarded the project on a no-bid basis? The contracts claim the company was the only “responsible source” to build the tool.

Yep, the fault was the procurement process. That’s a surprise?

The MIT write up relishes its insights about government procurement; for example:

“Nobody wants to hear about it, because it sounds really complicated and boring, but the more you unpeel the onion of why all government systems suck, the more you realize it’s the procurement process,” says Hana Schank, the director of strategy for public-interest technology at the think tank New America.  The explanation for how Deloitte could be the only approved source for a product like VAMS, despite having no direct experience in the field, comes down to onerous federal contracting requirements, Schank says. They often require a company to have a long history of federal contracts, which blocks smaller or newer companies that might be a better fit for the task.

And the fix? None offered. That’s helpful.

There is one item of information missing from the write up; specifically, the answer to this question:

How many graduates of MIT worked on this project?

My hunch is that the culprit begins with the education and expertise of the individuals involved. The US government procurement process is a challenge, but aren’t the institutions training the people in consulting firms and in government agencies supposed to recognize a problem and provide an education that remediates the issue? Sure, it takes time, but government procurement has been a tangle for decades, yet outfits like MIT seem eager to ignore their responsibility to turn out graduates who solve problems, not create them.

Now, about that Epstein and alleged Chinese double-dipping thing? Oh, right. Not our job?

Consistent, just like government procurement processes it seems to me.

Stephen E Arnold, February 1, 2021

The Silicon Valley Way: Working 16 Hour Days in Four Hours?

January 26, 2021

Years ago I worked at a couple of outfits which expected professionals to work more than eight hours a day. At the nuclear outfit, those with an office, a helper (that used to be called a “secretary”), and ill-defined but generally complicated tasks were to arrive about 8 am and head out about 6 pm. At the blue chip consulting firm, most people were out of the office during “regular” working hours; that is, 9 am to 5 pm. Client visits, meetings, and travel were day work. Then, after 5 pm and before the next day began, professionals had to write proposals, review proposals, develop time and cost estimates, go to meetings with superiors, and field oddball phone calls (no mobiles, thumb typers; these phones had buttons, lights, and spectacularly weird interfaces). During the interview process at the consulting outfit, sleek recruiters in face-to-face meetings would reference 60-hour work weeks. That was a clue, and one often had to show up early Saturday morning to perform work as well. The hardy would show up on Sunday afternoon to catch up.

Imagine my reaction when I read “Report: One Third of Tech Workers Admit to Working Only 3 to 4 Hours a Day.” I learned:

  • 31% of professionals from 42 tech companies…said they’re only putting in between three and four hours a day
  • 27% of tech professionals said they work five to six hours a day
  • 11% reported only working one to two hours per day
  • 30% said they work between seven and 10 hours per day.

The data come from an anonymous survey and the statistical procedures were not revealed. Hence, the data may be wonky.

One point is highly suggestive. The 30 percent who do more are the high performers. With the outstanding management talent at high technology companies, why aren’t these firms terminating the underperforming 70 percent? (Oh, right, some outfits did try the GE way. Outstanding.)

My question for the 30 percent who are high performers is, “Why are you working for a company? Become a contractor or an expert consultant. You can use that old-school Type A behavior for yourself.”

Economic incentives? The thrill of super spreader events on Friday afternoon when beer is provided? Student loans to repay? Work is life?

I interpret the data another way. Technology businesses have a management challenge. Measuring code productivity, the value of a technology insight, and the honing of an algorithm require providing digital toys, truisms about pushing decisions down, and ignoring the craziness resulting from an engineer acting without oversight.

Need examples? Insider security threats, a failure to manage in a responsible manner, and a heads down effort to extract maximum revenue from customers.

In short, the work ethic quantified.

Stephen E Arnold, January 26, 2021

Digital Humanities Is Data Analytics For English Majors

January 4, 2021

Computer science and the humanities sit at opposite ends of the education spectrum. The two disciplines do not often mix, but when they do, wonderful things happen. The Economist shares a story about book and religious nerds using data analytics to uncover correlations in literature: “How Data Analysis Can Enrich The Humanities.”

The article explains how a Catholic priest and literary experts used data analysis technology, from punch card systems to modern software, to examine writing styles. The data scientists, working with literary experts, discovered correlations between authors, time periods, vocabulary, and character descriptions.
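
To make the technique concrete, here is a minimal sketch, my own illustration rather than anything from the Economist piece, of one crude stylometric comparison: reduce two texts to relative word frequencies and measure how similar the resulting vocabulary profiles are. Real digital humanities projects use far richer features (function words, n-grams, character networks), but the principle is the same; the sample sentences are hypothetical stand-ins.

    # Illustrative sketch only: compare the vocabulary profiles of two texts.
    import re
    from collections import Counter
    from math import sqrt

    def word_frequencies(text):
        """Lowercase the text, split it into words, and return relative frequencies."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = sum(counts.values())
        return {word: count / total for word, count in counts.items()}

    def cosine_similarity(freq_a, freq_b):
        """Cosine similarity of two frequency vectors; 1.0 means identical profiles."""
        shared = set(freq_a) | set(freq_b)
        dot = sum(freq_a.get(w, 0.0) * freq_b.get(w, 0.0) for w in shared)
        norm_a = sqrt(sum(v * v for v in freq_a.values()))
        norm_b = sqrt(sum(v * v for v in freq_b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    if __name__ == "__main__":
        text_1 = "It was the best of times, it was the worst of times."
        text_2 = "It is a truth universally acknowledged that a single man must want a wife."
        print(cosine_similarity(word_frequencies(text_1), word_frequencies(text_2)))

Swap in full novels and the same few lines will surface the kind of vocabulary correlations the article describes; the interesting scholarship lies in choosing which features to count.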

The discoveries point to how science and the humanities can team up to find new and amazing relationships in topics that have been picked to death by scholars. The approach creates new avenues for discussion. It demonstrates how science can enhance the humanities, and it also provides much-needed data for AI experimentation. One other thing it brings up is the disparity between the fields:

“However, little evidence yet exists that the burgeoning field of digital humanities is bankrupting the world of ink-stained books. Since the NEH set up an office for the discipline in 2008, it has received just $60m of its $1.6bn kitty. Indeed, reuniting the humanities with sciences might protect their future. Dame Marina Warner, president of the Royal Society of Literature in London, points out that part of the problem is that “we’ve driven a great barrier” between the arts and STEM subjects. This separation risks portraying the humanities as a trivial pursuit, rather than a necessary complement to scientific learning.”

It is important that science and the humanities cross over. In order for science to even start, people must imagine the impossible. Science makes imagination reality.

Whitney Grace, January 5, 2021

Why Google Misses Opportunities: A Report Delivered by the Tweeter Thing

January 1, 2021

Here’s a Twitter thread from a Xoogler who appears to combine the best of the thumb typer generation with the bittersweet recognition of Google’s defective DNA. In the thread, the Xoogler, allegedly a real person named Hemant Mohapatra, reveals some nuggets about the high school science club approach to business on steroids; for example:

  • Jargon. Did you know that GTM seems to mean either “global traffic management” or “Google tag manager” or Guatemala? Tip: Think global traffic management and a Google Achilles’ heel.
  • Mature reaction when a competitor aced out the GOOG. The approach makes use of throwing chairs. Yep, high school behavior.
  • Lots of firsts but a track record of not delivering what the customer wanted. Great at training, not so good in the actual game, I concluded.
  • Professionalism. A customer told the Google whiz kids: “You folks just throw code over the fence.” (There’s the “throw” word again.)
  • Chaotic branding. (It’s good to know even Googlers do not know what the name of a product or service is. So when a poobah from Google testifies and says, “I don’t know” in response to a question, that may be a truthful statement.)

Did the Xoogler take some learnings from the Google experience? Sure did. Here’s the key tweeter thing message:

My google exp reinforced a few learnings for me: (1) consumers buy products; enterprises buy platforms. (2) distribution advantages overtake product / tech advantages and (3) companies that reach PMF & then under-invest in S&M risk staying niche players or worse: get taken down.

The smartest people in the world? Sure, just losing out to Amazon and Microsoft now. What’s this tell us? Maybe bad genes, messed-up DNA, a failure to leave the mentality of the high school science club behind?

Stephen E Arnold, January 1, 2021

The Apple Covid Party App: One Minor and Probably Irrelevant Question

December 31, 2020

I am not into Apple or any other Sillycon Valley outfit. I am aware of the yip yap about curation policies, editorial control, and delivering a good experience. Yadda yadda, or, as the overtalkers on Pivot say, “yoga babble.”

I read “Apple Pulls App That Promoted Secret Parties During Ongoing Pandemic.” The write up explains that Apple removed the app from the lucrative App Store.

But there is one minor and probably irrelevant question which arises:

With all the effort Apple puts into curation, how did the app make it to the App Store?

My hunch is that talk, handwaving, and posturing are more important than evaluating, checking, and considering apps. But that’s just a hunch. Reality is probably different.

Stephen E Arnold, December 31, 2020

Failure: The Reasons Are Piling Up

December 28, 2020

Years ago I read a monograph by some bigwig in Europe. As I recall, that short book boiled failure down to one statement: “Little things add up.” The book contained a number of interesting industrial examples. “How Complex Systems Fail” is a modern take on the failure of systems. The author has cataloged 18 reasons. Here are three of them; it may be worth your time to check out the other 15.

  • Complex systems contain changing mixtures of failures latent within them.
  • Change introduces new forms of failure.
  • Failure free operations require experience with failure.

I am not an expert on failure although I have failed. I have had a couple of wins, but the majority of my efforts are total, complete flops. I am not sure I have learned anything. The witness to my ineptitude is this Web log.

Nevertheless, I would like to add a couple of additional reasons for failure:

  • Those involved deny the likelihood of failure. I suppose this is just the old “know thyself” thing. Thumb typers seem to be even more unaware of risks than I, the old admitted failure.
  • Impending failure emits signals which those involved cannot hear or actively choose to ignore.

The list of reasons will be expanded by an MBA pursuing a career in consulting. That, in itself, is one of those failure signals.

Little things still add up. Knowing about these little things is often difficult. I am not aware of a hearing aid to assist whiz kids in detecting the exciting moment when the digital construct goes boom.

Stephen E Arnold, December 28, 2020

Does Open Source Create Open Doors?

December 21, 2020

Here’s an interesting question I asked on a phone call on Sunday, December 20, 2020: “How many cyber security firms rely on open source software?”

Give up?

As far as my research team has been able to determine, no study is available to answer the question. I told the team that, based on comments made in presentations, at lectures, and in booth demonstrations at law enforcement and intelligence conferences, most of the firms do. Whether it is a utility like Elasticsearch or a component (code or library) that detects malicious traffic, open source is the go-to source.

The reasons are not far to seek and include:

  • Grabbing open source code is easy
  • Open source software is usually less costly than a proprietary commercial tool
  • Licensing allows some fancy dancing
  • Using what’s readily available and maintained by a magical community of one, two or three people is quick
  • Assuming that the open source code is “safe”; that is, not malicious.

My question was prompted after I read “How US Agencies’ Trust in Untested Software Opened the Door to Hackers.” The write up states:

The federal government conducts only cursory security inspections of the software it buys from private companies for a wide range of activities, from managing databases to operating internal chat applications.

That write up ignores the open source components commercial cyber security firms use. The reason many of the services look and function in a similar manner is a reliance on open source methods as well as the nine or 10 workhorse algorithms taught in university engineering programs.

What’s the result? A SolarWinds type of challenge. No one knows the scope, no one knows the optimal remediation path, and no one knows how many vulnerabilities exist and are actively being exploited.

Here’s another question: “How many of the whiz kids working in US government agencies can communicate the exact process for selecting, vetting, and implementing open source components, whether adopted directly (via 18F-type projects) or obtained from vendors of proprietary cyber security software?”
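
For what it is worth, one small, automatable piece of such a vetting process is checking each pinned dependency against a public advisory database. The sketch below is my own illustration, not any agency’s or vendor’s procedure; it assumes network access, the Python requests library, and the public OSV.dev query endpoint as documented at the time of writing. The package name and version are hypothetical examples.

    # Sketch of one vetting step: ask the OSV.dev advisory database whether a
    # specific open source package version has known vulnerabilities.
    import requests

    def known_vulnerabilities(name, version, ecosystem="PyPI"):
        """Return the list of OSV advisories recorded for one package version."""
        response = requests.post(
            "https://api.osv.dev/v1/query",
            json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("vulns", [])

    if __name__ == "__main__":
        # Hypothetical example: check a pinned Python client library.
        for advisory in known_vulnerabilities("elasticsearch", "7.9.0"):
            print(advisory.get("id"), "-", advisory.get("summary", "no summary"))

A check like this is trivial to run in a build pipeline; the harder questions the write up raises, such as who decides what “safe” means and who maintains the component, remain human ones.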

Stephen E Arnold, December 21, 2020

Modern Times: How Easy Is It to Control Thumbtypers? Easy

December 8, 2020

Navigate to “The Modern World Has Finally Become Too Complex for Any of Us to Understand.” You may want to read the listing of examples about how complex life has become. Note this sentence which refers to how humanoids accept computer outputs without much thinking:

What was fascinating — and slightly unnerving — was how these instructions were accepted and complied with without question, by skilled professionals, without any explanation of the decision processes that were behind them.

Let’s assume the write up is semi-accurate. The example may provide some insight into the influence, possibly power, that online systems which shape information have. In my experience, most people accept the “computer” as being correct. Years ago, in a lab testing nuclear materials, I noticed that two technicians were arguing about the output from a mass spec machine. I was with my friend (Dr. James Terwilliger). We watched the two technicians for a moment and noted that when human experience conflicts with a machine output, the discussion becomes frustrating for the humans. The resolution to the problem was to test the sample in another mass spec machine. Was this a fix? Nope.

The behavior demonstrated how humans flounder when dealing with machine outputs. These are either accepted or result in behavior that will not answer the question: “Okay, which output is accurate?”

The incident illustrates that humans may not like to take guidance from another human, but guidance from a “computer” is just fine. And when the output conflicts with experience, humans appear to manifest some oddball behavior.

Here are two questions:

How does a user / consumer of online information know if the output is in context, accurate, verifiable?

If the output is not, then what does the human do? More research? Experimentation? Ask a street person? Guess? Repeat the process (like the confused lab techs)?

This is not complexity; this is why those who own certain widely used systems control human thought processes and behaviors. Is this how society works? Are one percenters exempt from the phenomenon? Is this smart software or malleable human behavior?

Stephen E Arnold, December 8, 2020

Big Brother Might Not Be Looking Over Our Shoulders

October 6, 2020

Ever since George Orwell wrote his dystopian classic 1984, the metaphor of an Orwellian society has been part of the cultural zeitgeist. As more cameras and recording equipment become commonplace, the Orwellian metaphor becomes a reality, or at least so we are led to believe. Venture Beat explains that might not be true in the article, “AI Weekly: A Biometric Surveillance State Is Not Inevitable, Says AI Now Institute.”

According to the article, we have been conditioned to believe that an Orwellian, a.k.a. surveillance capitalist, society is inevitable, so we do not fight the companies and governments that implement the technology. During the current COVID-19 pandemic, the idea is easy to believe, especially as demand for biometric technology grows.

Biometric data collection and surveillance are new, and there are still gray areas when it comes to the legal, ethical, and safe usage of biometrics. AI Now wrote a report that examines biometrics’ challenges, their importance, and possible solutions. Eight real-life case studies are referenced, such as police use of facial recognition technology in the United States and United Kingdom, the centralization of biometric data in Australia and India, biometric surveillance in schools, and others.

An understanding of biometric challenges and solutions is important for everyone. There are currently barriers that hinder a greater understanding, starting with the lack of an agreed-upon definition of “biometrics.” Some want to pause the use of these systems until laws are reformed, while others want to ban biometric technology outright:

“To effectively regulate the technology, average citizens, private companies, and governments need to fully understand data-powered systems that involve biometrics and their inherent tradeoffs. The report suggests that ‘any infringement of privacy or data-protection rights be necessary and strike the appropriate balance between the means used and the intended objective.’ Such proportionality also means ensuring a ‘right to privacy is balanced against a competing right or public interest.’”

How and why biometric technology will be used depends on the situation. The report cited the example of Swedish schools that implemented facial recognition technology to track students’ attendance. Swedish authorities feared that the technology would creep on students and teachers, gathering rich data on them, and they wondered how else this “creepy” data could be used. On the flip side, the same facial recognition technology can be used to identify unauthorized people on school campuses and to watch for weapons. Both concerns are valid, but which side is correct?

Regulation is needed but might happen only after biometric systems are deployed. Consider India’s Aadhaar project, a biometric identity program covering every citizen (photographs, fingerprints, and iris scans). Aadhaar ran for twelve years without legal guardrails, and when Indian lawmakers could have repaired its problems, they instead made laws that skirted them.

Biometric systems will be implemented, but human error prevents them from being Orwellian.

Whitney Grace, October 6, 2020
