What Makes the Web Slow? Really Slow?

January 28, 2021

I read “We Rendered a Million Web Pages to Find Out What Makes the Web Slow.” My first reaction was to think of the East Coast Internet outage which ruined some Type A workers’ day. I can hear the howls, “Mommy, I can’t attend class, our Internet is broken again.”

Here’s a passage from the “Rendered a Million Web Pages” write up which I found interesting:

Internet commentators are fond of saying that correlation does not equal causation, and indeed we can’t get at causality directly with these models. Great caution should be exercised when interpreting the coefficients, particularly because a lot of confounding factors may be involved. However, there’s certainly enough there to make you go “hmm”.
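A toy simulation makes the confounding worry concrete. This is my own illustration, not the article’s data: a hidden factor, say how ad-heavy a page is, drives both a visible page feature and the load time, so the naive correlation between the two looks causal even though neither causes the other.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 10_000

  # Hidden confounder (hypothetical): how ad-heavy each page is.
  ad_density = rng.normal(size=n)

  # Both the visible "feature" and the load time are driven by the confounder.
  framework_weight = 2.0 * ad_density + rng.normal(size=n)
  load_time = 3.0 * ad_density + rng.normal(size=n)

  # The naive correlation looks strong even though framework_weight
  # does not cause load_time; the ads do the damage.
  print(np.corrcoef(framework_weight, load_time)[0, 1])  # roughly 0.85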

Yep, I went “hmm.” But for these reasons:

  • Ad load times slow down my Web experiences. Don’t you love those white-page hung ads on the YouTube or the wonky baloney on the Daily Mail?
  • How about crappy Internet service providers?
  • Are you thrilled with cache misses?
  • Pages stuffed full of trackers, bugs, codes, and spammy SEO stuff.

Hmm, indeed.

Stephen E Arnold, January 28, 2021

The Silicon Valley Way: Working 16 Hour Days in Four Hours?

January 26, 2021

Years ago I worked at a couple of outfits which expected professionals to work more than eight hours a day. At the nuclear outfit, those with an office, a helper (that used to be called a “secretary”), and ill-defined but generally complicated tasks were expected to arrive about 8 am and head out about 6 pm. At the blue chip consulting firm, most people were out of the office during “regular” working hours; that is, 9 am to 5 pm. Client visits, meetings, and travel were the day work.

Then, after 5 pm or whenever, and before the next day began, professionals had to write proposals, review proposals, develop time and cost estimates, go to meetings with superiors, and field oddball phone calls (no mobiles, thumb typers; those phones had buttons, lights, and spectacularly weird interfaces). During the interview process at the consulting outfit, sleek recruiters in face-to-face meetings would reference 60-hour work weeks. That was a clue, but one often had to show up early Saturday morning to perform work as well. The hardy would show up on Sunday afternoon to catch up.

Imagine my reaction when I read “Report: One Third of Tech Workers Admit to Working Only 3 to 4 Hours a Day.” I learned:

  • 31% of professionals from 42 tech companies…said they’re only putting in between three and four hours a day
  • 27% of tech professionals said they work five to six hours a day
  • 11% reported only working one to two hours per day
  • 30% said they work between seven and 10 hours per day.

The data come from an anonymous survey and the statistical procedures were not revealed. Hence, the data may be wonky.

One point is highly suggestive. The 30 percent who do more are the high performers. With the outstanding management talent at high technology companies, why aren’t these firms terminating the under-performing 70 percent? (Oh, right, some outfits did try the GE way. Outstanding.)

My question is, “For the 30 percent who are high performers, why are you working for a company? Become a contractor or an expert consultant. You can use that old school Type A behavior for yourself.”

Economic incentives? The thrill of super spreader events on Friday afternoon when beer is provided? Student loans to repay? Work is life?

I interpret the data another way. Technology businesses have a management challenge. Measuring code productivity, the value of a technology insight, or the honing of an algorithm is hard, so management falls back on providing digital toys, truisms about pushing decisions down, and ignoring the craziness which results when an engineer acts without oversight.

Need examples? Insider security threats, a failure to manage in a responsible manner, and a heads-down effort to extract maximum revenue from customers.

In short, the work ethic quantified.

Stephen E Arnold, January 26, 2021

Computing: Things Go Better with Light

January 22, 2021

Electricity is too slow at matrix math for IBM. Now, announces ZDNet, “IBM Is Using Light, Instead of Electricity, to Create Ultra-Fast Computing.” The shift could be especially important to the future of self-driving automobiles, where ultra-fast processing is needed to avoid collisions at high travel speeds. Reporter Daphne Leprince-Ringuet writes:

“Although the device has only been tested at a small scale, the report suggests that as the processor develops, it could achieve one thousand trillion multiply-accumulate (MAC) operations per second and per square-millimeter – according to the scientists, that is two to three orders of magnitude more than ‘state-of-the-art AI processors’ that rely on electrical signals.”
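For those who do not juggle MACs daily: a multiply-accumulate is the single multiply-and-add step which matrix math, and therefore deep learning, repeats trillions of times. A minimal illustration in plain Python (my sketch, obviously not IBM’s photonic code):

  def mac(acc, a, b):
      # One multiply-accumulate (MAC): acc += a * b.
      return acc + a * b

  def dot(xs, ws):
      # A dot product is a chain of MACs; a matrix multiply is many dot products.
      acc = 0.0
      for x, w in zip(xs, ws):
          acc = mac(acc, x, w)
      return acc

  print(dot([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]))  # 3.0

The photonic trick described in the article is running many such operations in parallel on different optical wavelengths instead of one after another over electrical signals.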

IBM researchers have been working toward this goal for some time. Last year, the company demonstrated the tech’s potential through in-memory computing with devices that performed computational tasks using light. Now they have created what they call a photonic tensor core they say is particularly suited for deep-learning applications. The article continues:

“The most significant advantage that light-based circuits have over their electronic counterparts is never-before-seen speed. Leveraging optical physics, the technology developed by IBM can run complex operations in parallel in a single core, using different optical wavelengths for each calculation. Combined with in-memory computing, IBM’s scientists achieved ultra-low latency that is yet to be matched by electrical circuits. For applications that require very low latency, therefore, the speed of photonic processing could make a big difference. … With its ability to perform several operations simultaneously, the light-based processor developed by IBM also requires much less compute density.”

That is another consideration for self-driving vehicles—the smaller the hardware the better. But this technology is far from ready for the road. IBM still must evaluate how it can be integrated for end-to-end performance. The potential to trade electricity for light is an interesting development; we are curious to see how this unfolds.

Cynthia Murrell, January 22, 2021

Technology and Exponential Costs: MBAs Confront a Painful Online Reality

January 20, 2021

The article “When Costs Are Nonlinear, Keep It Small” addresses exponential costs in terms of technology in general. A business uses software; costs can grow exponentially. I completely agree. The author, one Jessitron, states:

When costs increase nonlinearly with delay or batch size, larger batches are not more efficient…. The changes interact, and so batching them up increases the cost of the batch by more than the cost of the change you’re adding. Batching is less efficient.
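A back-of-the-envelope model of that claim (mine, not Jessitron’s): assume every pair of changes shipped together can interact and each interaction adds a fixed cleanup cost. The batch cost then grows with the square of the batch size, while shipping the same changes one at a time stays linear.

  # Toy model: each change costs 1 unit; every pair of changes shipped
  # together adds an assumed interaction cost of 0.5 units.
  def batch_cost(n, unit=1.0, interaction=0.5):
      pairs = n * (n - 1) / 2          # every pair can interact
      return n * unit + pairs * interaction

  def one_at_a_time_cost(n, unit=1.0):
      return n * unit

  for n in (1, 2, 5, 10, 20):
      print(n, batch_cost(n), one_at_a_time_cost(n))
  # 20 changes: 115.0 units batched versus 20.0 units shipped singly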

I want to use this observation to explain why online information services find themselves in a cost swamp. The consequences of the exponential demand for resources are:

  • Management needs cash and must put more pressure on sales professionals to close deals. Pressure leads to overstatement and clever workarounds. Once the deal is closed and an invoice sent, the sales professional moves on either to another company or to another customer.
  • Marketing gets the message that sales are number one, so the art history majors and former hospitality workers crank out hyperbole-stuffed messages. (Post SolarWinds, check out the tone-deaf pitching of security systems which failed to notice the breach. Sales are needed, and marketing is the cheerful servant of the organization.)
  • Fancy dancing with the books. The number of online companies booking business before cash arrives is probably infinitesimal, right? But there are other ways of producing money; for example, the public information about the activities of Fast Search & Transfer provides an example. Other examples are available.
  • Go back to the funders. An enthusiastic group of clear-eyed, good-school, sincere individuals explains to an equally clear-eyed, good-school, sincere group why money should be invested. The recipients pray for a big sale or other financial home run because repaying the money is what might be called a long shot.

These activities are often a result of the truths that Jessitron explains and illustrates with annotated drawings.

In the online world, when something goes wrong, money must be spent. The amount required is unknown until the wrong is righted. How much is a new product? Same deal. The amount of cash required is unknown, yet cash must be spent.

Exponential costs are part of the deal. The article suggests that changes be kept small; that is, changing many things increases the likelihood of problems. Problems require cash. A cycle, just not so virtuous.

Online services live with exponential costs. Thus, the online vendors have zero choice but to do the type of thinking which has created some of the more fascinating ethical, financial, political, and technical tactical minefields dotting the datasphere.

Useful paper, Jessitron. Keep it small.

Stephen E Arnold, January 20, 2021

Intel Reminds Apple That It Is a Horse Around Company

January 19, 2021

I read “Intel Suggests It Will Wait for New CEO to Make Critical Decisions to Fix Manufacturing Crisis.” The headline suggests that Intel cannot manufacture chips as it did in the glory days of Silicon Valley. Wow, who knew?

There are a couple of other gems in this “real” news story too; to wit:

Intel allegedly embraces this view of Apple, another small outfit in the computing business:

“We have to deliver better products to the PC ecosystem than any possible thing that a lifestyle company in Cupertino” makes, Gelsinger told employees Thursday. That’s a derisive, if good-natured, reference to Apple and the location of its corporate headquarters.

Yep, lifestyle. Apple, I would remind Intel, has managed to enter the chip business without any of the quantum computing lynchpin baloney like the Horse Ridge innovation. That’s a technical achievement which strikes me as a combination of marketing, jargon, and horse feathers. Maybe a horse collar or a saddle blanket?

Another interesting passage asserts:

In a note to clients after Gelsinger’s hiring [the new CEO], Raymond James analyst Chris Caso said Intel doesn’t have time to deliberate.

Okay, time. There’s the ever chipper AMD, the Qualcomm outfit, a couple of eager beavers in lands which favor zesty spices. Oh, yes, and there’s the Apple operation, which sells products from pushcarts.

The article details the failures and fantasies of a company which has created Horse Ridge. Unfortunately, instead of a stallion, the computational cowboys are riding Norwegian Fjord horses in the chip derbies.

Stephen E Arnold, January 19, 2021

Yikes! Fund People, Not Projects

January 18, 2021

“Fund People, Not Projects III: The Newton Hypothesis; Is Science Done by a Small Elite?” addresses innovation, procurement assumptions, and MBA chestnuts. The write up is long, running about 6,300 words. Here’s my summary of the argument in the research paper:

You bet your bippy, pilgrim.

Here’s the academic version of my summary:

The Newton hypothesis seems true, as far as citations are concerned: science is advanced by a small elite. This is not just “Einstein-level” breakthroughs, the small elite may not be 0.01% but 1-5% of the total number of practicing scientists. Even 10% would still cohere with the idea of scientific elitism. Citations at least on a first pass do seem to correlate with “good science” both casually (Highly cited classic papers) and by assessment of peers (Nobel prize panels; Nobel-winning papers are highly cited, and cite highly cited research).

The write up also explains why some technology organizations decline; for example, the Google. The reason is that really good people leave for greener pastures either mentally or physically. The result? Gmail goes down, Intel can’t make chips, and IBM can’t get Watson to deliver that mythical billion dollar business. Common sense, yes. Will significant change take place in staff management, procurement, or MBA thinking about innovation?

Nope.

Stephen E Arnold, January 18, 2021

More No Code and Low Code Action

January 11, 2021

I suppose AI is mainstream now, for here we have a version for users who are not computer scientists. “Blaize Launches Open, Code-Free AI Software Platform,” we learn from Australia’s IT Brief. No code, perfect for art history majors and MBA degree holders. We’re told the platform, named AI Studio, carries the user from the spark of inspiration, through deployment, and into software management. It even includes a digital assistant which, sensibly, answers to the phrase “Hey Blaize.” The write-up lists the platform’s features:

  • “Code-free assistive user interface (UI)
  • Workflow support for open standards (including ONNX, OpenVX, containers, Python, and GStreamer). Support for these open standards allows AI Studio to deploy to any hardware that fully supports the standards.
  • Marketplaces collaboration allows users to discover models, data and complete applications from anywhere – public or private – and collaborate continuously to build and deploy high-quality AI applications. It provides support for open public models, data marketplaces and repositories, and provides connectivity and infrastructure to host private marketplaces.
  • User friendly application development workflow, with optimized models for specific datasets and use cases. “AI Studio’s unique Transfer Learning feature quickly retrains imported models for the user’s data and use case. Blaize edge-aware optimization tool, NetDeploy, automatically optimizes the models to the user’s specific accuracy and performance needs.”
  • Additional MLOps and DevOps features, including deployment, management, and monitoring of edge AI applications”
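The “Transfer Learning” feature in that list is the one item with a well-understood shape. As a generic sketch only (standard PyTorch, not Blaize’s actual AI Studio API), retraining an imported model on a user’s own data usually looks like this:

  import torch.nn as nn
  import torch.optim as optim
  from torchvision import models

  NUM_USER_CLASSES = 5                      # hypothetical: the user's own categories

  model = models.resnet18(pretrained=True)  # start from an imported, pretrained model
  for param in model.parameters():          # freeze the general-purpose layers
      param.requires_grad = False

  # Swap the final layer so the model predicts the user's classes instead.
  model.fc = nn.Linear(model.fc.in_features, NUM_USER_CLASSES)

  optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()
  # ...then train only the new layer on the user's (smaller) dataset.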

AI Studio should be available to the general public in the first quarter of next year, though a few select customers can get their hands on it now. Located in El Dorado Hills, California, Blaize was founded in 2010 as Thinci. We do not know why the company changed its name in 2019; perhaps they simply did not like the sound of “Hey Thinci.”

Cynthia Murrell, January 11, 2021

The Mainframe Wants to Be Young Again

January 7, 2021

I know that the idea of a mainframe being young is not widespread. I am not certain that a TikTok video has been created to demonstrate the spring in the septuagenarian’s step. I learned about the mainframe’s most recent aspirations in “Big Mainframe Computing.” I noted this statement:

BMC is now aiming to help build what it (and everybody else in the industry) is calling the autonomous digital enterprise by putting the artificial intelligence (AI) in mAInframe. The company now refers to the joint BMC Automated Mainframe Intelligence (AMI) and Compuware portfolios… and this is the world of Ops plus AI = AIOps.

I quite like the realization that the letters “ai” appear in the word “mainframe.” From my perspective, innovations are chugging along. Companies like Apple, AMD, and nVidia are crafting solutions likely to create additional computing alternatives for “smart software.”

Would you pay to attend a ballet performed by the people in pharmaceutical advertisements on cable nightly news programs?

I know I would not because I am 77 and know that what was possible in the 1960s is a bit of a challenge.

Stephen E Arnold, January 7, 2021

Tape for Back Ups: What about Restore and a Few Other Trivial Questions?

December 31, 2020

I read “Fujifilm Created a Magnetic Tape That Can Restore 580 Terabytes.” Amazing. Remarkable. Incredible. Tape!

The write up reports:

The breakthrough, developed jointly with IBM Research, uses a new magnetic particle called Strontium Ferrite (SrFe), commonly used as a raw material for making motor magnets. Fujifilm has been investigating Strontium Ferrite as a possible successor to Barium Ferrite (BaFe), which is the leading material today.

Yep, strontium. Definitely a favorite among some laboring at LANL, Oak Ridge, and Argonne as well as among home experimenters with highly chemically reactive substances. Plus, there’s IBM in the mix. Yep, the Watson folk. Greetings, Blue folk.

I learned:

To put 580 terabytes in perspective, it’s roughly the equivalent of 120,000 DVDs or 786,977 CDs — IBM notes that stacking that many CDs would result in a tower 3,097 feet (944m) tall, or taller than Burj Khalifa, the world’s tallest building. All that data can now fit in a tape cartridge in the palm of your hand.

And how long will this wonder persist as usable media? 30 years.

I do have a couple of questions:

  • Write speed?
  • Read speed?
  • Actual restore speed for 500 terabytes (there is overhead on these puppies, right?)?
  • Mechanism to locate the specific blocks required for the restore?
  • In use error rate?
  • Storage environment required? (Faraday room, cavern in Kansas, in a pile on a metal rack in the junk closet?)
  • What’s the cost in fully loaded dollars for the software, device, and staff time for write and restore?
  • What’s the tensile strength of the medium in 29 years?

Ah, but there are no answers in the write up.
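For a sense of scale, here is my own back-of-envelope arithmetic with an assumed sustained read rate; the write up supplies no throughput figure at all:

  # Restore time for a 580 terabyte cartridge at an assumed drive speed.
  CARTRIDGE_TB = 580
  ASSUMED_READ_MB_PER_S = 400              # hypothetical sustained rate, not from the article

  total_mb = CARTRIDGE_TB * 1_000_000      # decimal units: 1 TB = 1,000,000 MB
  seconds = total_mb / ASSUMED_READ_MB_PER_S
  print(round(seconds / 86_400, 1))        # about 16.8 days of continuous streaming

Under that assumption, call it two to three weeks of perfect, error-free streaming before a single bad block, a mis-filed cartridge, or a drive alignment problem enters the picture.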

There you go. Let’s ask Watson or someone who has reported to a client, “Your tape backups are unreadable.” Ever heard that before? I sure have.

Stephen E Arnold, December 31, 2020

Plasmons: The Value of Pencil and Paper

December 18, 2020

I read “Physicists Solve Geometrical Puzzle in Electromagnetism.” I am not sure the title captures what the researchers discovered, but, hey, this is the era of the thumbtyper. Close enough for horse shoes. The technology focuses on the movement of electrons. The insights gleaned from the research will have some influence on new types of materials. But the write up contains a gem of an insight. Here’s the quote:

Guillaume Weick from the University of Strasbourg adds: “There is a trend for increasing reliance on heavy duty computations in order to describe plasmonic systems. In our throwback work, we reveal humble pen-and-paper calculations can still explain intriguing phenomena at the forefront of metamaterials research”.

Yes, indeed. Hands-on work, erasers, cross-outs, and elbow grease have value.

Stephen E Arnold, December 18, 2020
