Published Papers

This page contains my peer-reviewed publications in scientific and academic journals. I publish only through formal peer review. When it comes to scientific claims, work that has not survived independent scrutiny remains speculation, regardless of its author. The standard is not whether the math appears correct; it is whether it can withstand challenge. That commitment to verification is what separates published science from conjecture.

ORCID: https://orcid.org/0009-0002-5218-4772

Cosmic Microwave Background Without Expansion

Journal of Modern Physics, 17(1), 22–48 (2026)

New peer-reviewed results close two long-standing problems in cosmology. For nearly a century, the idea behind tired light and a non-expanding universe was considered dead. Fritz Zwicky proposed it in 1929: the notion that redshift could arise from light losing energy over distance rather than from space expanding. The idea was buried not because expansion had been directly proven, but because it was believed this energy loss would ruin the spectrum. That single objection killed static cosmologies for generations. It is now resolved. The original spectral failure no longer applies, which means static or non-expanding universes are once again viable at the background level where they were previously ruled out.

For over sixty years, the cosmic microwave background was treated as the final proof of expansion and the Big Bang. It was considered the crown jewel that settled the question. That status no longer holds. This paper shows the exact CMB blackbody spectrum remains intact without requiring expansion. Expansion is consistent with the CMB, but it is not required by it. This reduces the CMB from proof to sufficiency. For a century, entire classes of cosmological models were excluded not by contradiction with data, but by a single assumed necessity. Removing that necessity changes how evidence is weighed, how alternatives are evaluated, and where explanatory pressure actually belongs.

Black hole singularities and the limits of the spacetime continuum

European Physical Journal Plus, Springer Nature (2026)

When a massive star collapses, general relativity predicts all its matter compresses into a single point of infinite density called a singularity. This prediction has been central to black hole physics since Karl Schwarzschild solved Einstein's field equations in 1916. For over a century, physicists treated these infinities as real features of the theory rather than signs that the theory has broken down. But no physical material in any other branch of science is assumed to survive infinite compression. Steel yields. Bones fracture. Crystal lattices shatter. Every known substance has a point where it stops behaving as a continuum and breaks. Spacetime has been the one exception, not because anyone proved it could survive, but because no one formally asked the question. This paper asks it. It introduces a dimensionless invariant built from curvature scalars, together with a critical threshold. When that invariant reaches unity, spacetime is interpreted as failing, the same way a material fails when stress reaches its yield point. The singularity is not where physics ends. It is where the continuum description stops being valid. Every external prediction of general relativity (horizons, orbits, gravitational waves) remains completely unchanged.
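The paper's actual invariant and threshold are not reproduced here. As a minimal sketch under assumed definitions, one can build a dimensionless ratio from the Kretschmann scalar of the Schwarzschild exterior, K = 48 G²M²/(c⁴r⁶), normalized by a hypothetical Planck-scale critical curvature. The choice of scalar and of K_crit are illustrative assumptions, not the paper's:

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
l_planck = 1.616e-35  # Planck length, m

def kretschmann(M, r):
    """Kretschmann curvature scalar of the Schwarzschild exterior, units 1/m^4."""
    return 48.0 * G**2 * M**2 / (c**4 * r**6)

def failure_invariant(M, r, K_crit):
    """Illustrative dimensionless invariant sqrt(K / K_crit); continuum
    failure is read off (by assumption) where it reaches unity."""
    return math.sqrt(kretschmann(M, r) / K_crit)

# Hypothetical critical curvature: Planck scale, an assumption for illustration.
K_crit = 1.0 / l_planck**4

M_sun = 1.989e30
r_s = 2.0 * G * M_sun / c**2  # Schwarzschild radius, about 2.95 km
print(failure_invariant(M_sun, r_s, K_crit))  # far below 1 at the horizon

# Radius where the invariant reaches unity, deep inside the interior:
r_fail = (48.0 * G**2 * M_sun**2 / (c**4 * K_crit)) ** (1.0 / 6.0)
print(r_fail)  # a tiny length scale: where the continuum description fails
```

The qualitative point survives any reasonable choice of K_crit: the invariant is negligible at the horizon, so all external predictions are untouched, and it crosses unity only deep in the interior, where the continuum description is declared invalid.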

This is historically significant because it is the first formal mechanical failure criterion for spacetime ever published. Since 1916, singularities have been treated as either physically real or as problems waiting for quantum gravity to solve. Penrose and Hawking proved in the 1960s that singularities are unavoidable under certain conditions, but never addressed what physically happens when one forms. Decades of work in limiting curvature gravity, elastic medium analogies, and phase transition models circled the same idea from different angles, but none produced a single unifying failure condition. This paper does. It reframes 110 years of black hole physics by treating singularities not as places where reality breaks but as places where our description of reality breaks. The distinction is not philosophical. It is mechanical, quantitative, and now part of the peer-reviewed literature through Springer Nature.

Strong Equivalence Principle: Violations without Failure

Schrödinger: Journal of Physics Education, Vol. 6, No. 4, December 2025

Students are taught that the Strong Equivalence Principle is fundamental to general relativity, and from that teaching a natural assumption follows: if a theory violates SEP, it must fail the classical tests of gravity. That assumption is wrong. This paper introduces a compact pedagogical framework built on the parametrized post-Newtonian (PPN) formalism, showing that SEP is sufficient but not necessary for reproducing light deflection, Shapiro delay, and perihelion advance. Brans-Dicke theory with finite coupling provides the concrete counterexample: a theory that violates SEP while remaining fully consistent with solar system observations.
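The sufficiency-versus-necessity point can be illustrated with two textbook expressions, not material from the paper: the PPN light-deflection formula, with angle proportional to (1+γ)/2, and the well-known Brans-Dicke value γ = (1+ω)/(2+ω). For a large coupling ω, the theory violates SEP yet its deflection prediction is observationally indistinguishable from general relativity's:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M_sun = 1.989e30 # solar mass, kg
R_sun = 6.957e8  # solar radius, m (grazing impact parameter)

def deflection_arcsec(gamma, M=M_sun, b=R_sun):
    """PPN light deflection: alpha = (1 + gamma)/2 * 4GM/(c^2 b), in arcsec."""
    alpha_rad = (1.0 + gamma) / 2.0 * 4.0 * G * M / (c**2 * b)
    return math.degrees(alpha_rad) * 3600.0

def gamma_brans_dicke(omega):
    """PPN gamma parameter for Brans-Dicke theory with coupling omega."""
    return (1.0 + omega) / (2.0 + omega)

print(deflection_arcsec(1.0))                     # GR (gamma = 1): ~1.75 arcsec
print(deflection_arcsec(gamma_brans_dicke(4e4)))  # Brans-Dicke at large omega
```

The two printed values differ by roughly a hundred-thousandth of an arcsecond, far below observational resolution, which is the counterexample in miniature: passing the test does not require satisfying the principle.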

The distinction matters because it corrects a confusion that runs through introductory and advanced courses alike. Passing an experimental test and satisfying a theoretical principle are not the same thing, and conflating them distorts how students understand the relationship between theory and evidence. The paper provides classroom-ready diagrams, worked examples, and exercises that allow instructors to teach this separation directly. It is the first published framework designed specifically to help students distinguish sufficiency from necessity within gravitational theory.

The Constructibility Principle

Philosophy and Cosmology, Volume 36, 2026

Philosophy of cosmology has a demarcation problem. Models like eternal inflation, many-worlds interpretations, and anthropic reasoning are mathematically coherent and often sophisticated, but they place their central claims beyond the reach of any observation. Popper's falsifiability breaks down when a theory never touches testable ground. Lakatos offered research programmes but gave no answer for models that remain permanently beyond test. This paper introduces the Constructibility Principle as a practical filter: what cannot be built cannot be claimed. To build means to realize through logical or mathematical construction, symbolic or computational simulation, or operational and physical procedure. The principle does not ban speculation; it clarifies the standing of speculation by requiring a constructive path before a claim gains epistemic weight.

The principle is applied across philosophy, cosmology, and artificial intelligence through a four-gate framework: awareness, expression, reality test, and recursive closure. Eternal inflation without measurable surrogates fails. Many-worlds without constructive access to other branches fails. Anthropic reasoning without testable consequences reduces to tautology. The same filter distinguishes grounded computation from ungrounded machine outputs in large language models. The Constructibility Principle does not replace Popper's falsifiability, Lakatos's research programmes, or Dawid's non-empirical criteria. It sets the threshold that comes before all of them, because a claim without a constructive path cannot even be evaluated for adequacy. It is the first systematic demarcation tool designed to handle the unique methodological pressure of fields where direct empirical test is permanently out of reach.

The strange consolation of AI ethics

AI & Society, Curmudgeon Corner, published 08 November 2025

Big AI companies now talk a lot about feelings, care, and ethics. They suggest their systems can listen, comfort people, or even act like therapists. The problem is that this care is simulated. The systems do not actually understand or care, but the appearance of listening is treated as good enough. Ethics boards, reviews, and audits are announced to show responsibility, but they rarely stop products from launching or change real decisions. These steps look serious on the outside, yet they mostly exist to make people feel reassured while everything continues as planned.

This creates a dangerous pattern. When ethics becomes something that is performed instead of enforced, accountability disappears. Audits list problems but do not stop deployment. Reviews happen after decisions are already made. Over time, the process itself replaces real responsibility. People feel protected because the structures exist, even though those structures have no power. The result is a culture that prefers comfort over change, simulation over substance, and reassurance over real control. The risk is not just technical, but social, because when accountability turns into a performance, meaningful oversight fades and damage is only addressed after it is too late.

Einstein’s second postulate: sufficient but not necessary

International Journal of Mathematical Education in Science and Technology, published online 18 March 2026

Special relativity is usually introduced with the constancy of the speed of light as a starting point. This paper shows that the structure of the theory does not require that assumption. Beginning from symmetry principles alone, including relativity, homogeneity, isotropy, reciprocity, and closure under composition, the Lorentz transformations follow with an invariant speed parameter k. The familiar structure of spacetime kinematics emerges from these constraints without reference to light. The standard form of the theory appears once experiment identifies this parameter with the measured signal speed. This reframes the role of Einstein's second postulate as sufficient to recover the theory, but not necessary for its logical construction.
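The closure-under-composition step can be checked numerically. This sketch is not the paper's derivation; it assumes the standard 1+1 dimensional boost with a generic invariant speed k and verifies two things: that two boosts compose into a single boost at the combined velocity, and that k itself is unchanged by composition, with no reference to light anywhere:

```python
def boost(v, k):
    """2x2 boost matrix with invariant speed k:
    x' = g(x - vt), t' = g(t - vx/k^2), where g = 1/sqrt(1 - v^2/k^2)."""
    g = 1.0 / (1.0 - v * v / (k * k)) ** 0.5
    return [[g, -g * v], [-g * v / (k * k), g]]

def compose(A, B):
    """2x2 matrix product: apply B, then A."""
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def add_velocities(u, v, k):
    """Composition law forced by closure: w = (u + v) / (1 + uv/k^2)."""
    return (u + v) / (1.0 + u * v / (k * k))

k = 1.0          # units in which the invariant speed is 1
u, v = 0.5, 0.6
AB = compose(boost(u, k), boost(v, k))
W = boost(add_velocities(u, v, k), k)

# Closure: composing two boosts gives a boost at the composed velocity.
print(all(abs(AB[i][j] - W[i][j]) < 1e-9 for i in range(2) for j in range(2)))
# The parameter k is invariant: composing any u with k returns k.
print(add_velocities(0.5, k, k))
```

Nothing here identifies k with the speed of light; that identification is the separate, empirical step the paper isolates as Einstein's second postulate.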

This distinction matters because it resolves a long-standing point of confusion in physics education. Students often encounter the impression that the theory assumes what it is trying to prove, especially in discussions of clock synchronization and signal propagation. By separating the mathematical structure from the empirical identification, the framework removes that tension and presents relativity as a clear progression from symmetry to measurement. The result is a more transparent and modular way to teach the subject, where each step has a defined role. It strengthens conceptual understanding without changing the physical content of the theory, and it gives instructors a practical alternative for presenting relativity in a way that is both rigorous and easier for students to follow.