Posts

Showing posts from November, 2025

Airbus A320 — caused by a critical software bug

The recent trouble with the Airbus A320, caused by a critical software bug, shows why rigorous testing is non-negotiable in complex systems. The "A320 bug", linked to the aircraft's ELAC (Elevator Aileron Computer) software, was triggered by intense solar radiation that can corrupt flight-control data. As a result, dozens of international carriers were forced to ground or recall thousands of jets, arguably the largest aircraft-fleet recall in aviation history. Had the faulty version of the software been subjected to more exhaustive testing, including scenarios such as solar-radiation-induced data corruption, the vulnerability might have been caught before commercial deployment. A robust test suite would have included "edge cases" (rare but plausible events) to stress-test the code under extreme conditions. The root cause: during periods of intense solar radiation (solar flares and charged particles), the electronic data processed by the ELAC software can become corrupted, a bit-f...
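The failure mode described above, a radiation-induced bit flip in stored flight-control data, can be illustrated with a toy sketch of triple modular redundancy (TMR) with majority voting, a classic mitigation in avionics. The values, function names, and simulation here are hypothetical illustrations, not Airbus's actual ELAC design.

```python
# Toy illustration of a single-event upset (a radiation-induced bit flip)
# and one classic mitigation: triple modular redundancy (TMR) with
# bitwise majority voting. Purely hypothetical, not the real ELAC logic.

def flip_bit(value: int, bit: int) -> int:
    """Simulate a single-event upset by flipping one bit of a stored word."""
    return value ^ (1 << bit)

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote across three redundant copies of a word."""
    return (a & b) | (a & c) | (b & c)

# Three redundant copies of a flight-control parameter (hypothetical value)
original = 0b1011_0010
copies = [original, original, original]

# Radiation corrupts exactly one copy: bit 5 flips
copies[1] = flip_bit(copies[1], 5)

recovered = tmr_vote(*copies)
print(recovered == original)   # True: the single corrupted copy is outvoted
```

A test suite built around this kind of fault injection, flipping each bit of each copy in turn and asserting recovery, is one way the "edge cases" the post mentions can be exercised before deployment.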

The AI Decade: A Sobering Assessment of Three Futures (2025–2035)

# **The AI Decade: A Sobering Assessment of Three Futures (2025–2035)**

*Why the next ten years will determine whether artificial intelligence delivers a productivity revolution, triggers a catastrophic bust, or—most likely—muddles through in messy, uneven waves*

---

## **Introduction: The Trillion-Dollar Question**

Global capital markets are currently making the largest coordinated technology bet in human history. Between 2023 and 2025, corporations and governments committed over $500 billion to AI infrastructure, foundation models, and deployment—a figure projected to exceed $1 trillion cumulatively by 2028. This spending dwarfs the scale of previous technology build-outs, including the dotcom era and the initial cloud computing wave. Yet unlike past cycles, this investment is proceeding with a troubling combination of certainty and opacity. Corporate leaders speak with conviction about AI's transformative potential while simultaneously acknowledging—often in the same breath...

One-tailed and two-tailed p-values: when to use them in statistics

🔹 Definition: A p-value tells us how likely it is to observe data at least as extreme as ours, assuming the null hypothesis (H₀) is true. The difference between one-tailed and two-tailed p-values depends on what kind of difference or effect we are testing for.

🔸 1. Two-Tailed Test: You use a two-tailed test when you are testing for any significant difference, either greater than or less than a certain value. ✅ Example: Suppose the average blood pressure (BP) in a population is 120 mmHg. You want to test whether a new drug changes BP (it could increase or decrease it). Null hypothesis (H₀): μ = 120. Alternative hypothesis (H₁): μ ≠ 120. If your sample mean = 126 and the calculated two-tailed p-value = 0.04, this means there is a 4% chance of observing a difference this extreme (in either direction) if the true mean were really 120. If α = 0.05, then since 0.04 < 0.05 → reject H₀. Conclusion: the drug significantly changes BP (either up or down).

🔸 2. One-Taile...
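The blood-pressure example above can be reproduced with a short sketch using a z-test on the normal distribution. The population standard deviation (σ = 15) and sample size (n = 25) are assumed for illustration, since the excerpt does not give them, so the computed two-tailed p-value (≈ 0.0455) only approximates the post's 0.04.

```python
import math
from statistics import NormalDist

# Values from the example: mean under H0 and observed sample mean
mu0, xbar = 120.0, 126.0
# Assumed for illustration (not given in the post)
sigma, n = 15.0, 25

# z-statistic: how many standard errors the sample mean is from mu0
z = (xbar - mu0) / (sigma / math.sqrt(n))    # 6 / 3 = 2.0

# Two-tailed: extreme in either direction (H1: mu != 120)
p_two_tailed = 2 * (1 - NormalDist().cdf(abs(z)))
# One-tailed: extreme in one direction only (H1: mu > 120)
p_one_tailed = 1 - NormalDist().cdf(z)

print(round(z, 2))             # 2.0
print(round(p_two_tailed, 4))  # 0.0455
print(round(p_one_tailed, 4))  # 0.0228
```

Note that the one-tailed p-value is exactly half the two-tailed one here, which is why the choice of test must be made before looking at the data.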