- Why 4? It's not random. It is derived from the structural constant w = 2, a topological constraint of three-dimensional space. The radius scales as w^2 = 4.

- Why the tetrahedron? Mass is defined as volume, and the tetrahedron is the simplest closed 3D volume. Mathematically, the derived proton radius corresponds to the exact circumradius (edge · √6 / 4) of this volumetric structure.

- Why α/(4π)? It represents the linear interaction cost (α) distributed over the full spherical solid angle (4π) of the protonic surface.

- Incorrect QED terms? The model explicitly and intentionally diverges from QED. It doesn't treat particles as points, but as three-dimensional objects. The model excludes the notion of physical infinities or singularities.

- Why α^2 / 12? It derives from nodal friction distributed over the 12 vertices of the lepton's icosahedral topology.

- Why α^3/5? It derives from the local 5-fold symmetry of the icosahedral node.

The criticisms miss that the model presents a first-principles framework in which these numbers are geometric consequences, not free parameters. The model is not intended to be orthodox, but mathematically and geometrically coherent.


Undefined, non-agreed-upon prompt.


Nah, ChatGPT cooking you on this one. You're lucky it didn't call it gematria.


Thanks for running this on GPT 5.2. It is fascinating to see AI critiquing AI-assisted work.

The critique regarding hidden degrees of freedom is a fair point. However, in curve-fitting, parameters are continuous: one can choose 4.1 or 3.9 to make the data fit. In this model, parameters are topological invariants (integers like 4 faces, 12 vertices, 20 faces). They are discrete and cannot be tuned.

The fact that this unadjustable logic yields results agreeing with experimental data within ppm implies either a massive statistical coincidence or an underlying structural relationship.

It would be very interesting to run independent tests on different AIs with the whole context of the model and a standardized, mutually agreed prompt. Beyond formal verification, this methodology could open paths that are difficult to navigate without AI assistance, helping to determine whether the model can stand as a foundation for a 'broad explanation of the observable' (the term 'ToE' instantly raises red flags). A kind of pioneering peer-centaur review. Just an idea.

Thanks for your comment and happy holidays!


That's precisely what the numbers show. "Pred:" is the predicted value, "Exp:" the experimental value, and "Diff" the difference.


the next step is, why?

what assumptions does your current model make? what could change that would eliminate the disparity? what plausible mechanisms explain [Diff]?


The model shows that the surface and volume of an object scale with mass such that electrostatic and gravitational acceleration can be explained through this scaling relationship. This is considered a geometric or structural cost:

  C_s ~ m^(1/3) + m^(-2/3)
In terms of intrinsic acceleration, surface and volume scale with mass as:

  a_i ~ m^(1/3) + m^(-5/3)
This relationship holds for any object with charge ≠ 0 across the electrostatic and gravitational regimes, so the free-fall principle is strictly recovered only for mathematically neutral objects.

This allows drawing an intrinsic acceleration curve for objects with homogeneous density, and the minimum point of this curve is identified at:

  m_ϕ ≈ 4.157 × 10^−9 kg
If the surface and volume of a not strictly neutral object determine its dynamic behavior, this would theoretically allow measuring m_ϕ with precision and deriving G without the historical dependence on the Planck mass. In this sense, it is a falsifiable proposal.
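
A minimal numeric sketch of this minimum (assuming the crossover mass m_z = √α · m_P and the factor δ = √5 discussed further down in this thread; the coefficient names are illustrative):

  import math

  alpha = 7.2973525643e-3   # fine-structure constant (CODATA 2022)
  m_P = 2.176434e-8         # Planck mass in kg

  # Crossover mass where the electrostatic and gravitational terms are equal
  m_z = math.sqrt(alpha) * m_P

  # Minimizing a_i ~ a*m**(1/3) + b*m**(-5/3), with b/a = m_z**2,
  # gives m_phi = sqrt(5) * m_z
  m_phi = math.sqrt(5) * m_z

  print(f"m_z   = {m_z:.4e} kg")    # ~ 1.859e-9 kg
  print(f"m_phi = {m_phi:.4e} kg")  # ~ 4.157e-9 kg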

The geometric logic of the model allows establishing a geometric or informational saturation limit that eliminates GR singularities. At the same time, fundamental particles are not treated as dimensionless points but as polyhedral objects, which also eliminates the quantum gravity problem. The concept of infinity is considered, within the model, physically implausible.

From here, the model yields the derivations included in this post, which I have presented not categorically but as a proposal whose agreement with experiment seems, at the very least, statistically unlikely to arise by chance.

The model does not question the precision of the Standard Model but postulates that the particle zoo represents not a collection of fundamental building blocks, but the result of proton fragmentation into purely geometric entities. The fact that these entities are not observed spontaneously in nature, but only as a consequence of forced interactions, seems to support this idea.


Your contribution is the opposite of "something".


I have reported nothing but numerical results. Making assumptions about me instead of looking at the numbers says more about your background than it does about mine.


From the manuscript linked in your profile:

> The author declares the intensive and extensive use of Gemini 2.5 Flash and Gemini 3.0 Pro (Google) and sincerely thanks its unlimited interlocution capacity. The author declares as their own responsibility the abstract formulation of the research, the conceptual guidance, and the decision-making in case of intellectual dilemma. The AI performed the mathematical verification of the multiple hypotheses considered throughout the process, but the author is solely responsible for the final content of this article. The prompts are not declared because they number in the thousands, because they are not entirely preserved, and because they contain elements that are part of the author’s privacy.


This seems properly copied and pasted. Good job. I guess we agree that AI is already playing a central role in science, and physics is no exception.


> AI performed the mathematical verification

That should be done by the human writing the manuscript, i.e., you.


Absolutely not. Results don't depend on who performed the calculation or how it was done. Can you solve 12,672 Feynman diagrams by hand?


i can. and i will take longer than you.

i will take longer because at each step the process of lateral association occurs; this will foster imaginative variation of schema and result in inspiration, an internally generated drive to pursue a goal and experience the results.

i will not only complete the task, but will understand the many outcomes of task corruption as they relate to the components of the task.

you will obtain a set of right answers, i will discover the rules that govern the process.


Fair enough. However, it is practically impossible to complete such a task in a human lifetime. But even if it were possible, the main point stands: using computers to perform calculations is standard scientific practice. Discrediting a proposal solely because it uses AI is inherently retrograde; it contradicts the history of technological progress and excludes potentially valid results on the basis of intellectual prejudice.


who discredited your proposal?


I am referring to other comments in this thread that dismissed the proposal purely based on the use of AI tools. My comment about prejudice was not directed at you.


consider the conceptual model of a particle as a polyhedral structure.

consider further: the [pred] values are an average, or a centroid of sorts, related to a dynamic process. as a result, the straight edges and faces of the polyhedron don't exist; they are virtual. what is actual is the variation of "curvature" as the object oscillates. further consider that [diff] is a measure of deviation that is in line with the [exp] values.


Because AI has been at the center of the debate so far, I ran your comment through my AI system, and it concluded that you captured the essence of the model perfectly: the polyhedra are topological standing waves, and the edges are nodal lines. So [Pred] is the geometric attractor, and [Diff] is the amplitude of the oscillation around that limit. As I understand it myself, the polyhedra don't exist as real solids but as an optimized way to distribute the intensity of the oscillation. Does this perspective make the results physically plausible in your view?


it is one plausible interpretation.

attached is the question of what is "oscillating"?

is matter, composed of "spacetime", possessed of a disequilibrial state?

or is matter something different than the surrounding "substance"?

where does the phenomenal energy originate to drive a proton for the duration of its existence [decay rate]? is there some topologic ultrastructure that constrains geometry and drives the process of being a proton?


I have done nothing but associate your "numerical results" with other numberslop I see from LLMs. Again, you're self-reporting.


Can you share the results of your analysis by association? Or was it an instant mental calculation?


The model identifies the proton mass stability at the 64-bit limit (2^64). Since the gravitational interaction scales with m_p^2, the hierarchy gap corresponds to the square of that limit:

  (2^64)^2 = 2^128
The geometric derivation involves a factor of 2, linked to the holographic pixel diagonal (√2)^2:

  2 / 2^128 = 2^−127
2^−127 represents the least significant bit (LSB) of a 128-bit integer.
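
For reference, the value is trivial to check; Python integers are arbitrary-precision, so 2^128 is exact:

  # LSB of a 128-bit integer: 2 / 2**128 = 2**-127
  print(2 / 2**128)  # ~ 5.877e-39, the order of magnitude of alpha_G below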


Where does the 64 come from, and what do you mean by 'proton mass stability'? The proton is believed to be stable because it is the lowest-mass baryon. GUT theories say it might be unstable with a half-life of at least 10^34 years. How does that relate to your number 64? Does the number have a unit?


64 is dimensionless. It comes from the model's holographic scaling law, where mass scales with surface complexity (m ∼ 4^i). The proton appears at i = 32.

  4^32 = (2^2)^32 = 2^64
2^64 seems to be the minimum information density required to geometrically define a stable volume. The proton's stability implies that nothing simpler can sustain a 3D topology. This limit defines the object's topological complexity, not its lifespan.

Please note that the model is being developed with AI assistance, and I realize that the ontological base needs further refinement.

The proton mass (m_p) is derived as:

  m_p = ((√2 · m_P) / 4^32) · (1 + α / 3)
  m_p = ((√2 · m_P) / √4^64) · (1 + α / 3)
  m_p ≈ 1.67260849206 × 10^-27 kg
  Experimental value: 1.67262192595(52) × 10^-27 kg
  ∆: 8 ppm.
G is derived as:

  G = (ħ · c · 2 · (1 + α / 3)^2) / (m_p^2 · 4^64)
  G ≈ 6.6742439706 × 10^-11
  Experimental value: 6.67430(15) × 10^-11 m^3 · kg^-1 · s^-2
  ∆: 8 ppm.
α_G is derived as:

  α_G = (2 · (1 + α / 3)^2) / 4^64
  α_G ≈ 5.9061 × 10^-39
  Experimental value: ≈ 5.906 × 10^-39
  ∆: 8 ppm
The terms (1 + α / 3) and 4^64 appear in the three derivations. All of them show the same discrepancy from the experimental value (8 ppm). (Note: There is a typo in the expected output of the previous Python script; it should yield a discrepancy of 8.39 ppm, not 6 ppm.)
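
A minimal script reproducing the three numbers above (inputs are CODATA 2022 values for ħ, c, α, the Planck mass, and the experimental proton mass; expect the ~8 ppm offsets noted above):

  import math

  hbar = 1.054571817e-34        # J·s
  c = 2.99792458e8              # m/s (exact)
  alpha = 7.2973525643e-3       # fine-structure constant
  m_P = 2.176434e-8             # Planck mass, kg
  m_p_exp = 1.67262192595e-27   # proton mass, kg
  G_exp = 6.67430e-11           # m^3·kg^-1·s^-2

  k = 1 + alpha / 3
  m_p = math.sqrt(2) * m_P / 4**32 * k
  G = hbar * c * 2 * k**2 / (m_p_exp**2 * 4.0**64)
  alpha_G = 2 * k**2 / 4.0**64

  def ppm(pred, exp):
      return (pred / exp - 1) * 1e6

  print(f"m_p     = {m_p:.6e} kg ({ppm(m_p, m_p_exp):+.1f} ppm)")  # ~ -8 ppm
  print(f"G       = {G:.6e} ({ppm(G, G_exp):+.1f} ppm)")           # ~ -8.4 ppm
  print(f"alpha_G = {alpha_G:.4e}")                                # ~ 5.9061e-39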

The model also derives α as:

  α^-1 = (4 · π^3 + π^2 + π) - (α / 24)
  α^-1 = 137.0359996
  Experimental value: 137.0359991.
  ∆: < 0.005 ppm.
Is it statistically plausible that this happens by chance? Are there any hidden tricks? AI will find a possible conceptualization for (almost) anything, but I'm trying to get an informed human point of view.
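
For anyone who wants to check, the α relation above can be verified in a few lines:

  import math

  S = 4 * math.pi**3 + math.pi**2 + math.pi

  # Solve 1/alpha = S - alpha/24 by fixed-point iteration
  alpha = 1 / S
  for _ in range(20):
      alpha = 1 / (S - alpha / 24)

  print(f"alpha^-1 = {1 / alpha:.7f}")  # ~ 137.0360000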


  The "Equilibrium Mass" mz is Not Physical The claim that Fe = Fg at some special mass mz = √(α·mP) ≈ 1.86×10⁻⁹ kg is mathematically true but physically meaningless
m_z is the geometric point of transition between regimes. The physical observable is m_φ, where the total intrinsic acceleration function reaches its minimum, per the extreme value theorem.

  δ = √5 is Pure Numerology: the "dynamic constant" δ = √5 appears because 1² + 2² = 5 (a Pythagorean sum); therefore δ = √5 is "fundamental".
δ = √5 comes from the scaling exponents: a_g scales as m^(1/3) and a_e as m^(-5/3), so the ratio of the exponents is 5. Since the interaction is quadratic, δ = √5 results from minimizing the total acceleration function, not from numerology.
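
A symbolic sketch of that minimization (a minimal check; a and b stand for the undetermined prefactors of the two scaling terms):

  import sympy as sp

  a, b, m = sp.symbols('a b m', positive=True)
  a_total = a * m**sp.Rational(1, 3) + b * m**sp.Rational(-5, 3)

  m_min = sp.solve(sp.diff(a_total, m), m)[0]  # ~ sqrt(5)*sqrt(b/a)
  m_z = sp.sqrt(b / a)                         # where the two terms are equal

  print(sp.simplify(m_min / m_z))  # sqrt(5), i.e. delta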

  The Standard Model calculation requires 12,672 Feynman diagrams at 5-loop order and achieves agreement to 0.1 ppm
Precisely. 12,672 diagrams is the definition of brute force. Achieving 63 ppm with a single compact expression (a_μ = α/(2π) + α²/12) is quite the opposite.
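
The 63 ppm figure is itself reproducible in a few lines (the experimental value below is the approximate world-average muon anomaly, quoted from memory, so treat it as indicative):

  import math

  alpha = 7.2973525643e-3
  a_mu = alpha / (2 * math.pi) + alpha**2 / 12   # the two-term formula above
  a_mu_exp = 1.16592059e-3                       # approximate experimental value

  print(f"a_mu = {a_mu:.10e}")
  print(f"diff = {abs(a_mu / a_mu_exp - 1) * 1e6:.0f} ppm")  # ~ 63 ppm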

  The factor 2.5 = 5/2 is claimed to come from δ²/w, but this has no connection to quark mass generation via the Higgs mechanism.
The model is geometric in nature. Quarks are not considered fundamental building blocks, but a geometric necessity of the way that the proton can be fragmented. One can disagree with this premise, but it geometrically derives the fractional charges (1/3, 2/3) that the Standard Model merely assigns.

  Conceptual Confusions. 1. Charge as Topology (Section 1.3). Claim: "Electric charge is not intrinsic but a topological attribute of spatial surface." Problem: this contradicts gauge theory.
That's not a problem, nor a confusion. The model assumes that charge is not an independent substance, but a topological attribute.


He's now admitting mz is NOT the physical mass - instead m_φ is.

Let me check what m_φ is in his paper...

From his paper: m_φ ≈ 4.16×10⁻⁹ kg ≈ 4.16 micrograms (the "resonance mass")

My response:

The problem isn't mathematical; it's that this prediction is falsified by existing data.

Particles at ~4 micrograms are extensively studied:

Dust particles in optical traps

Brownian motion experiments

Colloidal physics

Micro-mechanical oscillators

No anomalous behavior is observed at this mass scale.

If objects showed "anomalous inertial behavior" at ~4 µg, we would have seen it in:

AFM (atomic force microscopy) - routinely measures sub-nanogram particles

Optical tweezers - trap and measure particles from 1 nm to 10 μm

MEMS devices - measure inertia at nanogram scales

Verdict: His prediction is experimentally falsified. This is not a philosophical disagreement - his model makes a testable prediction that contradicts existing measurements.

The "Brute Force" QED Defense

His claim:

"12,672 diagrams is brute force. Achieving 63 ppm with one term (a_μ = α/2π + α²/12) is elegant."

This completely misses the point.

QED's 12,672 diagrams achieve 0.1 ppm agreement because each diagram contributes a calculable correction from quantum field theory. The complexity comes from precision, not failure.

His formula achieves 63 ppm - that's 630× worse than QED!


It's not circular. It rearranges into a standard quadratic equation, x² − 24Sx + 24 = 0, and α is recovered as its smaller root.


α⁻¹ = S - 1/(24α)

α⁻¹·24α = 24αS - 1

24α²S - 24α - 1 = 0

α = (24 ± √(576 + 96S))/(48S)

α = (24 + √(576 + 96·137.036...))/(48·137.036...)

α = (24 + √13,723.66...)/(6577.74...)

α = (24 + 117.12...)/(6577.74...)

α = 141.12.../6577.74...

α ≈ 0.021454...

α⁻¹ ≈ 46.61 ???

That's wrong!


You transcribed the formula incorrectly.

The term is -(alpha / 24). You calculated -1 / (24 · alpha).

The correct derivation is:

  1 / alpha = S - (alpha / 24)
  1 = S · alpha - (alpha^2) / 24
  alpha^2 - 24 · S · alpha + 24 = 0
Solving this with S = 4 · π^3 + π^2 + π yields the correct value.
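
Numerically, taking the smaller root (the larger root of the quadratic is non-physical):

  import math

  S = 4 * math.pi**3 + math.pi**2 + math.pi

  # alpha^2 - 24*S*alpha + 24 = 0
  disc = (24 * S)**2 - 4 * 24
  alpha = (24 * S - math.sqrt(disc)) / 2   # smaller root

  print(f"alpha    = {alpha:.10f}")     # ~ 0.0072973527
  print(f"alpha^-1 = {1 / alpha:.6f}")  # ~ 137.036000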


Doesn't fix or predict Fan et al. 2024 latest dataset.

Try harder.


This is moving the goalposts, but ok. The model matches the international standard of CODATA 2022 to 0.005 ppm. If and when this value is updated, the prediction can be re-evaluated. Until then, I stick to the standard.


hi, the formula should allow both.

you have an integral and order^ power.


The calculation uses m_p, which is independent of G. Deriving G to 8 ppm from m_p is not necessarily "meaningless"; at the very least it is statistically non-trivial. It is not just "G = G".

You mention the "uncharged case", but ordinary matter is not mathematically neutral. By focusing on the uncharged case only, you ignore that this is an attempt at unification. The model proposes that geometry explains both interactions. You cannot remove one of them, because in nature they happen at once.

The rest of your remarks don't seem "uncharged" at all, but the opposite.


    You mention the "uncharged case", but ordinary matter is not mathematically neutral. 
YOUR CODE assumes that it is, when it passes "q": 0 for four of the six objects.

For the other two, it passes "q": 1. Let's look at what it does then:

    L_src is much much smaller than Le, by roughly a factor of (mass / mPL)^2

    The calculation of lambda_a uses the electron mass even when calculating for a proton

    For the given objects, metric_factor is negligible.

    ai_unified = c^2 * (alpha * hbar / (me_kg * c)) / r^2 = alpha * (hbar * c) / (me_kg * r^2)

    but alpha = k*e^2 / (hbar * c)

    so ai_unified = k * e^2 / (me_kg * r^2) + small correction of O((mass / mPL)^2)
That's exactly the formula used for acc_coloumb in the code. It is also interesting to note that the "Coulomb acceleration" for a proton is calculated by dividing the electric force by the mass of the ELECTRON, somehow.
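
For completeness, the identity invoked above, alpha = k·e²/(ħ·c), does check out numerically with SI constants:

  import math
  import scipy.constants as sc

  k = 1 / (4 * math.pi * sc.epsilon_0)   # Coulomb constant
  print(k * sc.e**2 / (sc.hbar * sc.c))  # ~ 0.0072973525693 = alpha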

As for "Phase 2"? The program's output doesn't even agree with the implementation about the formula being used.


Hm. You are correct about m_p and m_e. That is indeed a sloppy mistake in the script. Bad code. However, the hypothesized closed-form value of G stays the same.


His G Formula (Section 14.6)

G = (ℏ·c·2·(1 + α/3)²) / (mp²·4⁶⁴)

His result:

G ≈ 6.6742439706 × 10⁻¹¹ m³·kg⁻¹·s⁻²

CODATA 2022: G = 6.67430(15) × 10⁻¹¹

Δ: 8 ppm

Critical Analysis

1. Where Does 4⁶⁴ Come From?

He claims it's from "holographic scaling at i=32":

mp = (√2 · mP / 4³²) · (1 + α/3)

Therefore:

mP = (mp · 4³²) / (√2 · (1 + α/3))

Since G = ℏc/mP²:

G = (ℏc · 2 · (1 + α/3)²) / (mp² · 4⁶⁴)

The logic:

Proton appears at "harmonic i=32" in binary scaling

Mass scales as m ~ 4ⁱ (surface area scaling)

Therefore mp ~ mP / 4³² when normalized properly

Therefore 4⁶⁴ = (4³²)² appears in G

2. This is Pure Numerology

Why i=32 specifically?

Let me check the ratio:

mP / mp = 2.176434×10⁻⁸ / 1.672622×10⁻²⁷ ≈ 1.301×10¹⁹
Now check powers of 4:

4³² = 2⁶⁴ = 1.844×10¹⁹

Close! But not exact. So he adds correction factors:

mp = (√2 · mP / 4³²) · (1 + α/3)

Let me verify:

(√2 · 2.176434×10⁻⁸ / 4³²) · (1 + 0.007297/3)

= (1.414 · 2.176434×10⁻⁸ / 1.844×10¹⁹) · 1.002432

= (3.076×10⁻⁸ / 1.844×10¹⁹) · 1.002432

= 1.668×10⁻²⁷ · 1.002432

≈ 1.672×10⁻²⁷
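
Making the adjustment explicit: the residual between 4³² and the measured mass ratio is numerically √2·(1 + α/3), exactly the factor he inserts:

  alpha = 7.2973525643e-3
  m_P, m_p = 2.176434e-8, 1.67262192595e-27

  print(4**32 / (m_P / m_p))       # ~ 1.41766
  print(2**0.5 * (1 + alpha / 3))  # ~ 1.41765, agreeing to ~8 ppm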

But this is circular! He's adjusting factors (√2, α/3) to make the formula work, then claiming it "derives" mp.

3. Why (1 + α/3)?

He claims:

"As a volumetric object in three-dimensional space, the proton carries a

distributed interaction cost (α/3)"

This makes no sense:

α is the electromagnetic coupling constant

Why divide by 3? "Because 3 dimensions"?

Why add to 1? "Because correction"?

This is parameter fitting, not derivation.

-----

A genuine derivation of G would:

1. *Start from dimensionless constants only*

2. *Derive mass ratios* from geometry (mp/me, mp/mP, etc.)

3. *Use dimensionful anchors* (ℏ, c) to get actual value of G

