I Applied Wavelet Transforms to AI and Found Hidden Structure
I've been working on resolving key contradictions in AI through structured emergence, a principle that so far appears to govern both physical and computational systems.
My grandfather was a prolific inventor in organic chemistry (GE Plastics, post-WWII), and reading his papers got me thinking about "chirality" - directional, asymmetric oscillating waves and how they might apply to AI. I found his work deeply inspiring.
I ran 7 empirical studies using publicly available datasets across prime series, fMRI, DNA sequences, galaxy clustering, baryon acoustic oscillations, redshift distributions, and AI performance metrics.
All 7 studies have confirmed internal coherence with my framework. While that's promising, I still need to continue validating the results (the attached output on primes captures localized frequency variations, ideal for detecting scale-dependent structure in primes, i.e. Ulam Spirals - attached).
To analyze these datasets, I applied continuous wavelet transformations (Morlet/Chirality) using Python3, revealing structured oscillations that suggest underlying coherence in expansion and emergent system behavior.
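For anyone who wants to poke at this themselves, a Morlet CWT of the prime-gap series can be reproduced in a few lines of NumPy. This is a minimal sketch, not the paper's code: the wavelet parameter w0, the scale grid, and the cutoff of 10,000 are illustrative choices.

```python
import numpy as np

def primes_below(n):
    """Sieve of Eratosthenes."""
    sieve = np.ones(n, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = False
    return np.flatnonzero(sieve)

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x - 0.5 * x**2) / np.sqrt(scale)

def cwt(signal, scales, w0=6.0):
    """CWT by direct correlation of the signal with each scaled wavelet."""
    n = len(signal)
    t = np.arange(-(n // 2), n - n // 2)
    return np.stack([
        np.correlate(signal, morlet(t, s, w0), mode="same") for s in scales
    ])

gaps = np.diff(primes_below(10_000)).astype(float)
gaps -= gaps.mean()                              # remove the DC component
power = np.abs(cwt(gaps, np.geomspace(2, 64, 12)))**2
print(power.shape)                               # (12, 1228): 12 scales x 1228 gaps
```

The scalogram `power` is what one would inspect for the claimed scale-dependent structure; np.correlate conjugates its second argument, so each row is the inner product of the gaps with the scaled wavelet.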
Paper here: https://lnkd.in/gfigPgRx
If true, here are the implications:
1. AI performance gains – applying structured emergence methods has yielded noticeable improvements in AI adaptability and optimization.
2. Empirical validation across domains – the same structured oscillations appear in biological, physical, and computational systems—indicating a deeper principle at work.
3. Strong early engagement – while the paper is still under review, 160 views and 130 downloads (81% conversion) in 7 days on Zenodo put it in the top 1%+ of all academic papers—not as an ego metric, but as an early signal of potential validation.
The same mathematical structures that define wavelet transforms and prime distributions seem to provide a pathway to more efficient AI architectures by:
1. Replacing brute-force heuristics with recursive intelligence scaling
2. Enhancing feature extraction through structured frequency adaptation
3. Leveraging emergent chirality to resolve complex optimization bottlenecks
Technical (for AI engineers):
1. Wavelet-Driven Neural Networks – replacing static Fourier embeddings with adaptive wavelet transforms to improve feature localization. Fourier was failing, hence the pivot to CWT; Ulam Spirals showed non-random structure, hence CWT.
2. Prime-Structured Optimization – using structured emergent primes to improve loss function convergence and network pruning.
3. Recursive Model Adaptation – implementing dynamic architectural restructuring based on coherence detection rather than gradient-based back-propagation alone.
The theory could be wrong, but the empirical results are simply too coherent not to share in case useful for anyone.
"The Chirality of Dynamic Emergent Systems (CODES): A Unified Framework for Cosmology, Quantum Mechanics, and Relativity" (2025) https://zenodo.org/records/14799070
Hey, chirality! /? Hnlog chiral https://westurner.github.io/hnlog/
> loss function
Yesterday on HN: Harmonic Loss instead of Cross-Entropy; https://news.ycombinator.com/item?id=42941393
> Fourier was failing hence
What about QFT Quantum Fourier transform? https://en.wikipedia.org/wiki/Quantum_Fourier_transform
Harmonic analysis involves the Fourier transform: https://en.wikipedia.org/wiki/Harmonic_analysis
> Recursive Model Adaptation
"Parameter-free" networks
Graph rewriting, AtomSpace
> feature localization
Hilbert curves cluster features; https://en.wikipedia.org/wiki/Hilbert_curve :
> Moreover, there are several possible generalizations of Hilbert curves to higher dimensions
Re: Relativity and the CODES paper;
/? fedi: https://news.ycombinator.com/item?id=42376759 , https://news.ycombinator.com/item?id=38061551
> Fedi's SQR Superfluid Quantum Relativity (.it), FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter.
Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
> structured emergent primes to improve loss function convergence and network pruning
Products of primes modulo prime for set membership testing; is it faster? Even with a long list of primes?
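The scheme in question is a Gödel-numbering-style encoding and fits in a few lines; the element-to-prime mapping below is arbitrary. As to "is it faster": the product grows multiplicatively with set size, so big-integer modulo quickly loses to a hash set's O(1) lookups; the divisibility test is mainly of theoretical interest.

```python
import math

def first_primes(k):
    """First k primes by trial division (fine for small k)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

universe = ["red", "green", "blue", "cyan"]
code = dict(zip(universe, first_primes(len(universe))))  # element -> prime

def encode(subset):
    """A set becomes the product of its members' primes."""
    return math.prod(code[x] for x in subset)

def member(product, x):
    """x is in the encoded set iff its prime divides the product."""
    return product % code[x] == 0

s = encode({"red", "blue"})                  # 2 * 5 = 10
print(member(s, "red"), member(s, "green"))  # True False
```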
Hey! Appreciate the links—some definitely interesting parallels, but what I’m outlining moves beyond existing QFT/Hilbert curve applications.
The key distinction is that structured emergent primes are demonstrating internal coherence across vastly different domains (prime gaps, fMRI, DNA, galaxy clustering), suggesting a deeper non-random structure influencing AI optimization.
Curious if you’ve explored wavelet-driven loss functions replacing cross-entropy? Fourier struggled with localization, but CWT and chirality-based structuring seem to resolve this.
Your thoughts here?
I do not have experience with wavelet-driven loss functions.
Do structured emergent primes afford insight into n-body fluid+gravity dynamics and superfluid (condensate) dynamics at deep space and stellar thermal ranges?
How do wavelets model curl and n-body vortices?
What do I remember about wavelets, without reading the article? Wavelets are or aren't analogous to neurons. Wavelets discretize. Am I confusing wavelets and autoencoders? Are wavelets like tiles or compression symbol tables?
How do wavelet-driven loss functions differ from other loss functions like Cross-Entropy and Harmonic Loss?
How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
Other seemingly relevant things:
- particle with mass only when moving in certain directions; re: chirality
- "NASA: Mystery of Life's Handedness Deepens" (2024-11) https://news.ycombinator.com/item?id=42229953 :
> ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
>>> Could this be used as an engine of some kind?
>> What about helical polarization?
> If there is locomotion due to a dynamic between handed molecules and, say, helically polarized fields; is such handedness a survival selector for life in deep space?
> Are chiral molecules more likely to land on earth?
>> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Hey - really appreciate the detailed questions—these are exactly the kinds of connections I’ve been exploring. Subcomponents:
Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss: You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.
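The time/frequency localization claim is easy to see in a toy setting, independent of any loss function: a single-level Haar transform (the simplest wavelet) pins a transient to its position, while the Fourier magnitude spectrum of the same signal is flat. A minimal sketch; the signal length and spike position are arbitrary:

```python
import numpy as np

n = 256
signal = np.zeros(n)
signal[100] = 1.0                      # a single transient

# One level of the Haar wavelet transform: one detail coefficient per pair
pairs = signal.reshape(-1, 2)
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)

# The wavelet detail is nonzero only at the transient's location...
print(np.flatnonzero(detail))          # [50] -> sample pair (100, 101)
# ...while the Fourier magnitude spectrum is flat: position is lost.
print(round(float(np.ptp(np.abs(np.fft.rfft(signal)))), 6))  # 0.0
```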
Prime emergence, harmonics, and convolution (Fourier vs. CWT): Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc.
• Fourier struggled because it assumes a globally uniform basis set.
• CWT resolves this by detecting frequency-dependent structures (chirality-based).
• Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering.
The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.
N-body vortex dynamics, superfluidity, and chiral molecules in deep space: You might be onto something here. The connection between:
• Superfluid dynamics in deep space
• Chiral molecules preferring certain gravitational dynamics
• Handedness affecting locomotion in polarized fields
suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).
Could this be an engine? (Electromagnetic rotation and helicity) Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
Show HN: PulseBeam – Simplify WebRTC by Staying Serverless
WebRTC’s capabilities are amazing, but the setup headaches (signaling, connection/ICE failures, patchwork docs) can kill momentum. That’s why we built PulseBeam—a batteries-included WebRTC platform designed for developers who just want real-time features to work.
What’s different?
• Built-in signaling
• Built-in TURN
• Time-limited JWT auth (serverless for production, or use our endpoint for testing)
• Client and server SDKs included
• Free and open-source core
If you’ve used libraries like PeerJS, PulseBeam should feel like home. We’re inspired by its simplicity. We’re currently in a developer-preview stage. We provide free signaling like PeerJS, and TURN up to 1GB. Of course, feel free to roast us.
jupyter-collaboration is built on Y Documents (y.js, pycrdt, jupyter_ydoc,) https://github.com/jupyterlab/jupyter-collaboration
There is a y.js WebRTC adapter, but jupyter-collaboration doesn't have WebRTC data or audio or video support AFAIU.
y-webrtc: https://github.com/yjs/y-webrtc
Is there an example of how to do CRDT with PulseBeam WebRTC?
With client and serverside data validation?
> JWT
Is there OIDC support on the roadmap?
E.g. Google supports OIDC: https://developers.google.com/identity/openid-connect/openid...
W3C DIDs, VC Verifiable Credentials, and Blockcerts are designed for decentralization.
STUN, TURN, and ICE are NAT traversal workarounds FWIU; though NAT traversal isn't necessary if the client knowingly or unknowingly has an interface with a public IPv6 address due to IPv6 prefix delegation?
We don't have an example of CRDT with PulseBeam yet. But, CRDT itself is just a data structure, so you can use PulseBeam to communicate the sync ops (full or delta) with a data channel. Then, you can either use y.js or other CRDT libraries to manage the merging.
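The "CRDT is just a data structure" point can be made concrete with a grow-only counter, the simplest state-based CRDT. This is sketched in Python for brevity (the same shape maps to y.js updates); assume the `counts` dict is what you would serialize and ship over a data channel:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica; merge = elementwise max."""

    def __init__(self, replica_id):
        self.id = replica_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.id] = self.counts.get(self.id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        """Commutative, associative, idempotent: sync-op order is irrelevant."""
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)                   # e.g. full-state sync received over a data channel
b.merge(a)
print(a.value(), b.value())  # 5 5
```

Because merge is idempotent and commutative, replicas converge no matter how often or in what order the channel delivers state.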
Yes, the plan is to use JWT for both the client and server side.
OIDC is not on the roadmap yet. But, I've been tinkering on the side related to this. I think something like an OIDC mapper to PulseBeam JWT can work here.
I'm not envisioning integrating into a decentralization ecosystem at this point. The scope is to provide a reliable service for 1:1 and small groups for other developers to build on top with centralization. So, something like Analytics, global segmented signaling (allow close peers to connect with edge servers, but allow them to connect to remote servers as well), authentication, and more network topology support.
That's correct, if the client is reachable by a public IPv6 (meaning the other peer has to also have a way to talk to an IPv6), then STUN and TURN are not needed. ICE is still needed but only used lightly for checking and selecting the candidate pair connections.
Revolutionizing software testing: Introducing LLM-powered bug catchers
Can this unit test generation capability be connected to the models listed on the SWE-bench [Multimodal] leaderboard?
I'm currently working on running an agent through SWE-Bench (RA.Aid).
What do you mean by connecting the test generation capability to it?
Do you mean generating new eval test cases? I think it could potentially have a use there.
ScholarlyArticle: "Mutation-Guided LLM-based Test Generation at Meta" (2025) https://arxiv.org/abs/2501.12862v1
Good call, thanks for linking the research paper directly here.
OpenWISP: Multi-device fleet management for OpenWrt routers
Anything similar for opnsense (besides their own service) or pfsense?
Maybe just go with ansible or similar: https://github.com/ansibleguy/collection_opnsense
Updating a fleet of embedded devices like routers (which can come online and go offline at any time) will generally be much easier using a pull-based update model. But if you’ve got control over the build and update lifecycle, a push-based approach like ansible might be appropriate.
Maybe I am missing something, but I would assume that base network infrastructure like routers, firewalls, and switches has higher uptime, availability, and reliability than ordinary servers.
The problem with push is that the service sitting at the center needs to figure out which devices will need to be re-pushed later on. You can end up with a lot of state that needs action just to get things back to normal.
So if you can convince devices to pull at boot time and then regularly thereafter, you know that the three states they can be in are down, good, or soon to be good. Now you only need to take action when things are down.
Never analyze distribution of software and config based on the perfect state; minimize the amount of work you need to do for the exceptions.
Unattended upgrades fail and sit there requiring manual intervention (due to lack of transactional updates and/or multiple flash slots (root partitions and bootloader configuration)).
Pull style configuration requires the device to hold credentials in order to authorize access to download the new policy set.
It's possible to add an /etc/init.d that runs sysupgrade on boot, install Python and Ansible, configure and confirm remote logging, and then run `ansible-pull`.
ansible-openwrt eliminates the need to have Python on a device: https://github.com/gekmihesg/ansible-openwrt
But then log collection; unless all of the nodes have correctly configured log forwarding at each stage of firmware upgrade, pull-style configuration management will lose logs that push-style configuration management can easily centrally log.
Pull based updates would work on OpenWRT devices if they had enough storage, transactional updates and/or multiple flash slots, and scheduled maintenance windows.
OpenWRT wiki > Sysupgrade: https://openwrt.org/docs/techref/sysupgrade
Calculating Pi in 5 lines of Python
> Infinite series can't really be calculated to completion using a computer,
The sum of an infinite divergent series cannot be calculated with or without a computer.
The sum of an infinite geometric series with first term a and common ratio |r| < 1 can be calculated with:
a/(1-r)
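Numerically, the partial sums of a geometric series approach a/(1-r); a quick check with a = 1, r = 1/2 (illustrative values):

```python
# Partial sums of a + a*r + a*r**2 + ... approach a/(1 - r) for |r| < 1.
a, r = 1.0, 0.5
closed_form = a / (1 - r)                        # 2.0
partial = sum(a * r**k for k in range(50))
print(closed_form, abs(closed_form - partial) < 1e-12)  # 2.0 True
```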
Sequence > Limits and convergence:
https://en.wikipedia.org/wiki/Sequence#Limits_and_convergenc...
Limit of a sequence: https://en.wikipedia.org/wiki/Limit_of_a_sequence
SymPy docs > Limits of Sequences: https://docs.sympy.org/latest/modules/series/limitseq.html
> Provides methods to compute limit of terms having sequences at infinity.
Madhava-Leibniz formula for π: https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
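The Madhava-Leibniz series converges slowly (the error after n terms is bounded by the first omitted term, roughly 2/n), which is easy to check numerically:

```python
def pi_approx(n_terms):
    """Madhava-Leibniz: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1)**k / (2 * k + 1) for k in range(n_terms))

# With a million terms the error is still only ~2e-6:
print(abs(pi_approx(1_000_000) - 3.141592653589793) < 1e-5)  # True
```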
Eco-friendly artificial muscle fibers can produce and store energy
> The team utilized poly(lactic acid) (PLA), an eco-friendly material derived from crop-based raw materials, and highly durable bio-based thermoplastic polyurethane (TPU) to develop the artificial muscle fibers that mimic the functional and real muscles.
"Energy harvesting and storage using highly durable Biomass-Based artificial muscle fibers via shape memory effect" (2025) https://www.sciencedirect.com/science/article/abs/pii/S13858...
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
But graphene alone presumably doesn't work in these applications due to its lack of tensility, unlike certain natural fibers?
Harmonic Loss Trains Interpretable AI Models
"Harmonic Loss Trains Interpretable AI Models" (2025) https://arxiv.org/abs/2502.01628
Src: https://github.com/KindXiaoming/grow-crystals :
> What is Harmonic Loss?
Cross Entropy: https://en.wikipedia.org/wiki/Cross-entropy
XAI: Explainable AI > Interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation
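As I read the paper, harmonic loss swaps dot-product logits for inverse powers of the Euclidean distance to per-class prototype (weight) vectors. The sketch below reflects that understanding; the exponent, vectors, and epsilon are illustrative choices, not the paper's code:

```python
import numpy as np

def cross_entropy(logits, target):
    """Standard softmax cross-entropy on dot-product logits."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[target])

def harmonic_loss(x, prototypes, target, n=2.0):
    """Class scores are inverse powers of the distance to class prototypes
    (n is the 'harmonic degree'; the epsilon avoids division by zero)."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12
    scores = d**-n
    return -np.log(scores[target] / scores.sum())

x = np.array([0.9, 0.1])
prototypes = np.array([[1.0, 0.0],   # class-0 prototype
                       [0.0, 1.0]])  # class-1 prototype
# x sits near the class-0 prototype, so the class-0 loss is smaller:
print(harmonic_loss(x, prototypes, 0) < harmonic_loss(x, prototypes, 1))  # True
```

The interpretability argument is that the prototypes live in the input space, so each class has a geometric "crystal" you can inspect, unlike raw softmax weight rows.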
Government planned it 7 years, beavers built a dam in 2 days and saved $1M
Oracle justified its JavaScript trademark with Node.js–now it wants that ignored
Calling ECMAScript JavaScript was a huge mistake that is still biting us.
Or alternatively, the mistake was just not coming up with a better alternative name than ECMAScript. If there were a catchier alternative name that was less awkward to pronounce, people might more happily have switched over.
"JS" because of the .js file extension.
ECMAScript version history: https://en.wikipedia.org/wiki/ECMAScript_version_history
"Java" is an island in Indonesia associated with coffee beans from the Dutch East Indies that Sun Microsystems named their portable software after.
Coffee production in Indonesia: https://en.wikipedia.org/wiki/Coffee_production_in_Indonesia... :
> Certain estates age a portion of their coffee for up to five years, normally in large burlap sacks, which are regularly aired, dusted, and flipped.
Build your own SQLite, Part 4: reading tables metadata
It's interesting to compare this series to the actual source code of sqlite. For example, sqlite uses a LALR parser generator: https://github.com/sqlite/sqlite/blob/master/src/parse.y#L19...
And queries itself to get the schema: https://github.com/sqlite/sqlite/blob/802b042f6ef89285bc0e72...
Lots of questions, but the main one is whether we have made any progress with these new toolchains and programming languages w/ respect to performance or robustness. And that may be unfair to ask of what is a genuinely useful tutorial.
If you don’t know it already, you’ll probably be interested in limbo: https://github.com/tursodatabase/limbo
It’s much more ambitious/complete than the db presented in the tutorial.
If memory serves me correctly, it uses the same parser generator as SQLite, which may answer some of your questions.
Is translation necessary to port the complete SQLite test suite?
sqlite/sqlite//test: https://github.com/sqlite/sqlite/tree/master/test
tursodatabase/limbo//testing: https://github.com/tursodatabase/limbo/tree/main/testing
ArXiv LaTeX Cleaner: Clean the LaTeX code of your paper to submit to ArXiv
It's really a pity that they do this now. Some of their older papers had actually quite some valuable information, comments, discussions, thoughts, even commented out sections, figures, tables in it. It gave a much better view on how the paper was written over time, or how even the work processed over time. Sometimes you also see some alternative titles being discussed, which can be quite funny.
E.g. from https://arxiv.org/abs/1804.09849:
%\title{Sequence-to-Sequence Tricks and Hybrids\\for Improved Neural Machine Translation}
% \title{Mixing and Matching Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Analyzing and Optimizing Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Frankenmodels for Improved Neural Machine Translation}
% \title{Optimized Architectures and Training Strategies\\for Improved Neural Machine Translation}
% \title{Hybrid Vigor: Combining Traits from Different Architectures Improves Neural Machine Translation}
\title{The Best of Both Worlds: \\Combining Recent Advances in Neural Machine Translation\\ ~}
Also a lot of things in the Attention is all you need paper: https://arxiv.org/abs/1706.03762v1
Maybe papers need to be put under version control.
Quantum Bayesian Inference with Renormalization for Gravitational Waves
ScholarlyArticle: "Quantum Bayesian Inference with Renormalization for Gravitational Waves" (2025) https://iopscience.iop.org/article/10.3847/2041-8213/ada6ae
NewsArticle: "Black Holes Speak in Gravitational Waves, Heard Through Quantum Walks" (2025) https://thequantuminsider.com/2025/01/29/black-holes-speak-i... :
> Unlike classical MCMC, which requires a large number of iterative steps to converge on a solution, QBIRD uses a quantum-enhanced Metropolis algorithm that incorporates quantum walks to explore the parameter space more efficiently. Instead of sequentially evaluating probability distributions one step at a time, QBIRD encodes the likelihood landscape into a quantum Hilbert space, allowing it to assess multiple transitions between parameter states simultaneously. This is achieved through a set of quantum registers that track state evolution, transition probabilities, and acceptance criteria using a modified Metropolis-Hastings rule.
> Additionally, QBIRD incorporates renormalization and downsampling, which progressively refine the search space by eliminating less probable regions and concentrating computational resources on the most likely solutions. These techniques enable QBIRD to achieve accuracy comparable to classical MCMC while reducing the number of required samples and computational overhead, making it a more promising approach for gravitational wave parameter estimation as quantum hardware matures.
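For contrast, the classical Metropolis-Hastings loop that QBIRD is benchmarked against fits in a few lines. This toy chain samples a unit Gaussian centered at 3; the target, step size, and seed are illustrative, not related to the gravitational-wave likelihood:

```python
import math
import random

random.seed(0)

def log_target(x):
    """Unnormalized log-density: a unit Gaussian centered at 3."""
    return -0.5 * (x - 3.0)**2

def metropolis(n_steps, step=1.0, x0=0.0):
    x, chain = x0, []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, target(proposal) / target(x))
        delta = log_target(proposal) - log_target(x)
        if random.random() < math.exp(min(0.0, delta)):
            x = proposal
        chain.append(x)
    return chain

burned = metropolis(20_000)[5_000:]      # discard burn-in
print(round(sum(burned) / len(burned)))  # 3
```

The quantum-walk variant aims to cut exactly this sequential step count; the classical chain must visit states one proposal at a time.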
Parameter estimation algorithms:
"Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171
"Robustly learning Hamiltonian dynamics of a superconducting quantum processor" (2024) https://www.nature.com/articles/s41467-024-52629-3 .. https://news.ycombinator.com/item?id=42086445
Can Large Language Models Emulate Judicial Decision-Making? [Paper]
An actor can emulate the communication style of judicial decision language, sure.
But the cost of a wrong answer (wrongful conviction) exceeds a threshold of ethical use.
> We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.
From "Asking 60 LLMs a set of 20 questions" https://news.ycombinator.com/item?id=37451642 :
> From https://news.ycombinator.com/item?id=36038440 :
>> Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law
>> A "who hath done it" exercise
>> "For each of these things, tell me whether God, Others, or You did it"
AI should never be judge, jury, and executioner.
Homotopy Type Theory
type theory notes: https://news.ycombinator.com/item?id=42440016#42444882
HoTT in Lean 4: https://github.com/forked-from-1kasper/ground_zero
I Wrote a WebAssembly VM in C
This is great! The WebAssembly Core Specification is actually quite readable, although some of the language can be a bit intimidating if you're not used to reading programming language papers.
If anyone is looking for a slightly more accessible way to learn WebAssembly, you might enjoy WebAssembly from the Ground Up: https://wasmgroundup.com
(Disclaimer: I'm one of the authors)
I know one of WebAssembly's biggest features by design is security / "sandbox".
But I've always gotten confused with... it is secure because by default it can't do much.
I don't quite understand how to view WebAssembly. You write in one language, it compiles things like basic math (nothing with network or filesystem) to another and it runs in an interpreter.
I feel like I have a severe lack of understanding, or a misunderstanding. There's been a ton of hype for years, and lots of investment... but isn't it the case that anywhere you'd want to add Lua to an app, you could add WebAssembly instead, and vice versa?
WebAssembly can communicate through buffers. WebAssembly can also import foreign functions (Javascript functions in the browser).
You can get output by reading the buffer at the end of execution/when receiving callbacks. So, for instance, you pass a few frames worth of buffers to WASM, WASM renders pixels into the buffers, calls a callback, and the Javascript reads data from the buffer (sending it to a <canvas> or similar).
The benefit of WASM is that it can't be very malicious by itself. It requires the runtime to provide it with exported functions and callbacks to do any file I/O, network I/O, or spawning new tasks. Lua and similar tools can go deep into the runtime they exist in, altering system state and messing with system memory if they want to, while WASM can only interact with the specific API surface you provide it.
That makes WASM less powerful, but more predictable, and in my opinion better for building integrations with as there is no risk of internal APIs being accessed (that you will be blamed for if they break in an update).
WASI Preview 1 and WASI Preview 2 can do file and network I/O IIUC.
Re: tty support in container2wasm and fixed 80x25 due to lack of SIGWINCH support in WASI Preview 1: https://github.com/ktock/container2wasm/issues/146
The File System Access API requires granting each app access to each folder.
jupyterlab-filesystem-access only works with Chromium based browsers, because FF doesn't support the File System Access API: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
The File System Access API is useful for opening a local .ipynb and .csv with JupyterLite, which builds CPython for WASM as Pyodide.
There is a "Direct Sockets API in Chrome 131" but not in FF; so WebRTC and WebSocket relaying is unnecessary for WASM apps like WebVM: https://news.ycombinator.com/item?id=42029188
WASI Preview 2: https://github.com/WebAssembly/WASI/blob/main/wasip2/README.... :
> wasi-io, wasi-clocks, wasi-random, wasi-filesystem, wasi-sockets, wasi-cli, wasi-http
US bill proposes jail time for people who download DeepSeek
That would disincentivize this type of research, for example:
"DeepSeek's Hidden Bias: How We Cut It by 76% Without Performance Loss" (2025) https://news.ycombinator.com/item?id=42868271
https://news.ycombinator.com/item?id=42891042
TIL about BBQ: Bias Benchmark for QA
"BBQ: A Hand-Built Bias Benchmark for Question Answering" (2021) https://arxiv.org/abs/2110.08193
Waydroid – Android in a Linux container
Surprised to see this on the frontpage - it's a well known piece of software.
It's unfortunate that there are no Google-vended images (e.g. the generic system image) that run on Waydroid. Typing my password into random ROMs from the internet sketches me out.
I wouldn't say it runs a "random ROM from the internet" - LineageOS is a very well-established project and is fully FOSS (free and open source software) except for firmware necessary for specific devices. It is the natural choice for any project, such as Waydroid, that requires good community support and ongoing availability.
Over a number of years, Google have progressively removed many of the original parts of AOSP (the FOSS foundation upon which Android is based), which means that alternative components have to be developed by projects like LineageOS. In spite of this, I suspect that LineageOS makes fewer modifications to AOSP than most phone vendors do, including Google themselves!
A few things seem like they're consistently missing from these projects: hardware 3D acceleration from the host in a version of OpenGL ES + Vulkan that most phones have natively, and handling of the many apps that have built-in ways of detecting that they're not running on a phone and bail out (looking at cpuinfo and cross-referencing that with the purported device being run).
It also seems that the expected ARM support on devices is increasing (along with expected throughput), and that the capability of the x86 host device you need to emulate even a modest modern mobile ARM SoC is getting higher and higher.
Lastly, the android version supported is almost always 3-4 generations behind the current Android. Apps are quick to drop legacy android support or run with fewer features/less optimizations on older versions of the OS. The android base version in this project is from 2020.
Anecdotally, using bluestacks (which indisputably has the most compatible and optimized emulation stack in the entire space) with a 7800X3D / RTX 3090 still runs most games slower than a snapdragon 8 phone from yesteryear running natively.
virtio-gpu rutabaga was recently added to QEMU IIUC mostly by Google for Chromebook Android emulation or Android Studio or both?
virtio-gpu-rutabaga: https://www.qemu.org/docs/master/system/devices/virtio-gpu.h...
Rutabaga Virtual Graphics Interface: https://crosvm.dev/book/appendix/rutabaga_gfx.html
gfxstream: https://android.googlesource.com/platform/hardware/google/gf...
"Gfxstream Merged Into Mesa For Vulkan Virtualization" (2024-09) https://www.phoronix.com/news/Mesa-Gfxstream-Merged
I don't understand why there is not an official x86 container / ROM for Android development? Do CI builds of Android apps not run tests with recent versions of Android? How do CI builds of APKs run GUI tests without an Android container?
There is no official support for x86 in Android any more - the Android-x86 project was the last I know of that supported/maintained it. Its last release was in 2022.
For apps that use Vulkan natively, it's easy - but many still use and rely on OpenGL ES. It's a weird scenario where you have apps that are now supporting Vulkan, but they have higher minimum OS requirements as a result... but those versions of Android aren't supported by these type of projects.
37D boundary of quantum correlations with a time-domain optical processor
"Exploring the boundary of quantum correlations with a time-domain optical processor" (2025) https://www.science.org/doi/10.1126/sciadv.abd8080 .. https://arxiv.org/abs/2208.07794v3 :
> Abstract: Contextuality is a hallmark feature of the quantum theory that captures its incompatibility with any noncontextual hidden-variable model. The Greenberger--Horne--Zeilinger (GHZ)-type paradoxes are proofs of contextuality that reveal this incompatibility with deterministic logical arguments. However, the GHZ-type paradox whose events can be included in the fewest contexts and which brings the strongest nonclassicality remains elusive. Here, we derive a GHZ-type paradox with a context-cover number of three and show this number saturates the lower bound posed by quantum theory. We demonstrate the paradox with a time-domain fiber optical platform and recover the quantum prediction in a 37-dimensional setup based on high-speed modulation, convolution, and homodyne detection of time-multiplexed pulsed coherent light. By proposing and studying a strong form of contextuality in high-dimensional Hilbert space, our results pave the way for the exploration of exotic quantum correlations with time-multiplexed optical systems.
New thermogalvanic tech paves way for more efficient fridges
"Solvation entropy engineering of thermogalvanic electrolytes for efficient electrochemical refrigeration" (2025) https://www.cell.com/joule/fulltext/S2542-4351(25)00003-0
Thanks for that; it's great that it's 10x better than the previous best effort. It's notable that it needs to get 20x better again before it starts to have useful applications :-).
Someday maybe!
Quantum thermal diodes: https://news.ycombinator.com/item?id=42537703
From https://news.ycombinator.com/item?id=38861468 :
> Laser cooling: https://en.wikipedia.org/wiki/Laser_cooling
> Cooling with LEDs in reverse: https://issuu.com/designinglighting/docs/dec_2022/s/17923182 :
> "Near-field photonic cooling through control of the chemical potential of photons" (2019) https://www.nature.com/articles/s41586-019-0918-8
High hopes, low expectations :-). That said, 'quantum thermal diode' sounds like something that breaks on your space ship and if you don't fix it you won't be able to get the engines online to outrun the aliens. Even after reading the article I think the unit of measure there should be milli-demons in honor of Maxwell's Demon.
Emergence of a second law of thermodynamics in isolated quantum systems
ScholarlyArticle: "Emergence of a Second Law of Thermodynamics in Isolated Quantum Systems" (2025) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
NewsArticle: "Even Quantum Physics Obeys the Law of Entropy" https://www.tuwien.at/en/tu-wien/news/news-articles/news/auc...
NewsArticle: "Sacred laws of entropy also work in the quantum world, suggests study" ... "90-year-old assumption about quantum entropy challenged in new study" https://interestingengineering.com/science/entropy-also-work...
Polarization-dependent photoluminescence of Ce-implanted MgO and MgAl2O4
NewsArticle: "Scientists explore how to make quantum bits with spinel gemstones" (2025) https://news.uchicago.edu/story/scientists-explore-how-make-... :
> A type of gemstone called spinel can be used to store quantum information, according to new research from a collaboration involving University of Chicago, Tohoku University, and Argonne National Laboratory.
3D scene reconstruction in adverse weather conditions via Gaussian splatting
Large Language Models for Mathematicians (2023)
It makes sense for LLMs to work with testable code for symbolic mathematics: CAS (Computer Algebra System) code instead of LaTeX, which only roughly corresponds to the underlying expressions.
Are LLMs training on the AST parses of the symbolic expressions, or on token co-occurrence? What about training on the relations between code and tests?
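The distinction between the two training views can be made concrete with Python's stdlib `ast` module; a small sketch (the expression is illustrative):

```python
import ast

# A symbolic expression as CAS-style code rather than LaTeX:
expr = "x**2 + 2*x + 1"

# Token-level view: the model sees a flat token sequence.
# AST view: the parse makes operator precedence and structure explicit.
tree = ast.parse(expr, mode="eval")

# The top-level node is an addition whose left operand is itself an
# addition -- structure that token co-occurrence alone leaves implicit.
top = tree.body
print(type(top).__name__)      # BinOp
print(type(top.op).__name__)   # Add
```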
Benchmarks for math and physics LLMs: FrontierMath, TheoremQA, Multi SWE-bench: https://news.ycombinator.com/item?id=42097683
Large language models think too fast to explore effectively
Maps well to Kahneman's "Thinking Fast and Slow" framework
system 1 thinking for early-layer processing of uncertainty in LLMs: quick, intuitive decisions, focused on uncertainty, happening in early transformer layers.
system 2 thinking for later-layer processing of empowerment (selecting elements that maximize future possibilities): strategic, deliberate evaluation, considering long-term possibilities, happening in later layers.
system 1 = 4o/llama 3.1
system 1 + system 2 = o1/r1 reasoning models
empowerment calculation seems possibly oversimplified - it assumes a static value for elements rather than dynamic, context-dependent empowerment
interesting that higher temperatures improved performance slightly for system 1 models although they still made decisions before empowerment information could influence them
edit: removed the word "novel". The paper shows early-layer processing of uncertainty vs later-layer processing of empowerment.
Stanovich proposes a three tier model http://keithstanovich.com/Site/Research_on_Reasoning_files/S...
Modeling an analog system like human cognition into any number of discrete tiers is inherently kind of arbitrary and unscientific. I doubt you could ever prove experimentally that all human thinking works through exactly two or three or ten or whatever number of tiers. But it's at least possible that a three-tier model facilitates building AI software which is "good enough" for many practical use cases.
Funny you should say that because an American guy did that a hundred years ago and nailed it.
He divided reasoning into the two categories corollarial and theorematic.
Charles Sanders Peirce > Pragmatism > Theory of inquiry > Scientific Method: https://en.wikipedia.org/wiki/Charles_Sanders_Peirce#Scienti...
Peirce’s Deductive Logic: https://plato.stanford.edu/entries/peirce-logic/
Scientific Method > 2. Historical Review: Aristotle to Mill https://plato.stanford.edu/entries/scientific-method/#HisRev...
Scientific Method: https://en.wikipedia.org/wiki/Scientific_method
Reproducibility: https://en.wikipedia.org/wiki/Reproducibility
Replication crisis: https://en.wikipedia.org/wiki/Replication_crisis
TFS > Replication crisis https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
"Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35879007 :
Lateralization of brain function: https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...
Ask HN: Percent of employees that benefit financially from equity offers?
Title says it all. Equity offers are a very common thing in tech. I don't personally know anyone who has made money from equity offers, though nearly all my colleagues have received them at some point.
Does anyone have real data on how many employees actually see financial upside from equity grants? Are there studies or even anecdotal numbers on how common it is for non-executives/non-founders to walk away with any money? Specifically talking about privately held US startups.
From https://news.ycombinator.com/item?id=29141796 :
> There are a number of options/equity calculators:
> https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits", "~20% of companies will make you money")
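A back-of-envelope expected-value sketch using the rough outcome buckets quoted above; the payout multiple and grant value are illustrative assumptions, not data:

```python
# Back-of-envelope expected value of a startup equity grant, using the
# rough tldroptions-style buckets quoted above (all numbers illustrative).
outcomes = {
    "no_exit":   {"prob": 0.65, "multiple": 0.0},  # ~65% never exit
    "low_exit":  {"prob": 0.15, "multiple": 0.0},  # common stock often wiped out
    "good_exit": {"prob": 0.20, "multiple": 3.0},  # assumed average payout multiple
}

def expected_multiple(outcomes):
    return sum(o["prob"] * o["multiple"] for o in outcomes.values())

grant_paper_value = 100_000  # hypothetical paper value at grant
print(round(expected_multiple(outcomes) * grant_paper_value, 2))  # 60000.0
```

The sketch ignores dilution, preference stacks, and taxes, all of which typically push the realistic expected value lower still.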
Ultra-fast picosecond real-time observation of optical quantum entanglement
"Real-time observation of picosecond-timescale optical quantum entanglement towards ultrafast quantum information processing" (2025) https://www.nature.com/articles/s41566-024-01589-7 .. https://arxiv.org/abs/2403.07357v1 (2024) :
> Abstract: Entanglement is a fundamental resource for various optical quantum information processing (QIP) applications. To achieve high-speed QIP systems, entanglement should be encoded in short wavepackets. Here we report the real-time observation of ultrafast optical Einstein–Podolsky–Rosen correlation at a picosecond timescale in a continuous-wave system. Optical phase-sensitive amplification using a 6-THz-bandwidth waveguide-based optical parametric amplifier enhances the effective efficiency of 70-GHz-bandwidth homodyne detectors, mainly used in 5G telecommunication, enabling its use in real-time quantum state measurement. Although power measurement using frequency scanning, such as an optical spectrum analyser, is not performed in real time, our observation is demonstrated through the real-time amplitude measurement and can be directly used in QIP applications. The observed Einstein–Podolsky–Rosen states show quantum correlation of 4.5 dB below the shot-noise level encoded in wavepackets with 40 ps period, equivalent to 25 GHz repetition—10³ times faster than previous entanglement observation in continuous-wave systems. The quantum correlation of 4.5 dB is already sufficient for several QIP applications, and our system can be readily extended to large-scale entanglement. Moreover, our scheme has high compatibility with optical communication technology such as wavelength-division multiplexing, and femtosecond-timescale observation is also feasible. Our demonstration is a paradigm shift in accelerating accessible quantum correlation—the foundational resource of all quantum applications—from the nanosecond to picosecond timescales, enabling ultrafast optical QIP.
Recipe Database with Semantic Search on Digital Ocean's Smallest VM
Datasette-lite supports SQLite in WASM in a browser.
DuckDB WASM would also solve for a recipe database without an application server, for example in order to reduce annual hosting costs of a side project.
Is the scraped data CC-BY-SA licensed? Attribution would be good.
/? datasette vector search
FAISS
WASM vector database: https://www.google.com/search?q=WASM+vector+database
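Whatever the backend (FAISS, a WASM vector database, or a datasette plugin), the core semantic-search query is cosine similarity over embeddings; a minimal brute-force sketch with toy vectors (a real app would use a sentence-embedding model):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy recipe "embeddings" (illustrative, not real model output).
recipes = {
    "tomato soup":  [0.9, 0.1, 0.0],
    "beef stew":    [0.2, 0.8, 0.1],
    "tomato salad": [0.8, 0.0, 0.2],
}

def search(query_vec, k=2):
    # Brute-force top-k; FAISS/HNSW indexes approximate this at scale.
    ranked = sorted(recipes, key=lambda n: cosine(query_vec, recipes[n]), reverse=True)
    return ranked[:k]

print(search([0.9, 0.1, 0.05]))  # ['tomato soup', 'tomato salad']
```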
Moiré-driven topological electronic crystals in twisted graphene
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
Adding concurrent read/write to DuckDB with Arrow Flight
Just sanity checking here - with Flight write streams to DuckDB, I'm guessing there is no notion of a transactional boundary, so if we want data consistency during reads, that's another level of manual app responsibility? And atomicity is there, but at the single record batch or row group level?
Ex: if we have a streaming financial ledger as 2 tables, that is 2 writes, and a reader might see an inconsistent state of only 1 write
Ex: streaming ledger as one table, and the credit+debit split into 2 distanced rowgroups, same inconsistency?
Ex: in both cases, we might have the server stream back an ack of what was written, so we could at least get a guarantee of which timestamps are fully written for future reads, and queries can manually limit to known-complete intervals
We are looking at adding streaming writes to GFQL, an open source columnar (arrow-native) CPU/GPU graph query language, where this is the same scenario: appends mean updating both the nodes table and the edges table
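The ack-watermark idea in the third example above can be sketched; all names here are hypothetical, not GFQL or DuckDB API:

```python
# Sketch of the "ack watermark" pattern: the writer records the server's
# acknowledgement timestamps per table, and readers restrict queries to
# intervals known to be fully written across ALL tables.

class WriteWatermark:
    """Tracks the latest timestamp for which every table has been acked."""
    def __init__(self, tables):
        self.acked = {t: 0 for t in tables}

    def record_ack(self, table, ts):
        # Server acked everything for `table` up to `ts`.
        self.acked[table] = max(self.acked[table], ts)

    def complete_through(self):
        # A cross-table read is consistent only up to the slowest table.
        return min(self.acked.values())

wm = WriteWatermark(["nodes", "edges"])
wm.record_ack("nodes", 105)
wm.record_ack("edges", 100)
print(wm.complete_through())  # 100 -> readers filter WHERE ts <= 100
```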
Yes, reading this post (working around a database's concurrency control) made me raise an eyebrow. If you are ok with inconsistent data then that's fine. Or if you handle consistency at a higher level that's fine too. But if either of these are the case why would you be going through DuckDB? You could write out Parquet files directly?
cosmos/iavl is a Merkleized AVL tree.
https://github.com/cosmos/iavl :
> Merkleized IAVL+ Tree implementation in Go
> The purpose of this data structure is to provide persistent storage for key-value pairs (say to store account balances) such that a deterministic merkle root hash can be computed. The tree is balanced using a variant of the AVL algorithm so all operations are O(log(n)).
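The deterministic-root property quoted above can be sketched minimally; this is far simpler than the balanced IAVL+ tree (no O(log n) updates, no persistence), but shows how a sorted key-value map yields an order-independent Merkle root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(kv: dict) -> bytes:
    # Hash leaves in sorted key order so the root is deterministic
    # regardless of insertion order.
    leaves = [h(k.encode() + b"\x00" + v.encode()) for k, v in sorted(kv.items())]
    if not leaves:
        return h(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])  # duplicate last leaf on odd levels
        leaves = [h(leaves[i] + leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

a = merkle_root({"alice": "10", "bob": "5"})
b = merkle_root({"bob": "5", "alice": "10"})  # insertion order differs
print(a == b)  # True: same state, same root
```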
Integer Vector clock or Merkle hashes?
Why shouldn't you store account balances in git, for example?
Or, why shouldn't you append to Parquet or Feather and LZ4 for strongly consistent transactional data?
Centralized databases can have Merkle hashes, too;
"How Postgres stores data on disk" https://news.ycombinator.com/item?id=41163785 :
> Those systems index Parquet. Can they also index Feather IPC, which an application might already have to journal and/or log, and checkpoint?
DLT applications for strong transactional consistency sign and synchronize block messages and transaction messages.
Public blockchains have average transaction times and costs.
Private blockchains also have TPS (Transactions Per Second) metrics, and unknown degrees of off-site redundancy for consistent storage with or without indexes.
Blockchain#Openness: https://en.wikipedia.org/wiki/Blockchain#Openness :
> An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. [46][47][48][49][50] Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. [51] Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. [52]
> Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. [46][48] Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion." [10][53]
Merkle Town: https://news.ycombinator.com/item?id=38829274 :
> How CT works > "How CT fits into the wider Web PKI ecosystem": https://certificate.transparency.dev/howctworks/
From "PostgreSQL Support for Certificate Transparency Logs Now Available" https://news.ycombinator.com/item?id=42628223 :
> Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
> Sigstore Rekor also has centralized Merkle hashes.
I think you replied in the wrong post.
No, I just explained how the world does strongly consistent distributed databases for transactional data, which is the exact question here.
DuckDB does not yet handle strong consistency. Blockchains and SQL databases do.
Blockchains are a fantastic way to run things slowly ;-) More seriously: Making crypto fast does sound like a fun technical challenge, but well beyond what our finance/gov/cyber/ai etc customers want us to do.
For reference, our goal here is to run around 1 TB/s per server, and many times more on a beefier server. The same tech just landed at spot #3 on the Graph 500 on its first try.
To go even bigger & faster, we are looking for ~phd intern fellows to run on more than one server, if that's your thing: OSS GPU AI fellowship @ https://www.graphistry.com/careers
The flight perspective aligns with what we're doing. We skip the duckdb CPU indirections (why drink through a long twirly straw?) and go straight to arrow on GPU RAM. For our other work, if duckdb does give reasonable transactional guarantees here, that's interesting... hence my (in earnest) original question. AFAICT, the answers rest on operational answers & docs that don't connect to how we normally talk about databases giving you consistent vs inconsistent views of data.
Do you think that blockchain engineers are incapable of developing high-throughput distributed systems due to engineering incapacity, or due to real limits on how fast a strongly consistent, sufficiently secured cryptographic distributed system can be? Are blockchain devs all just idiots, or have they dumbly prioritized data integrity, as if that doesn't matter now that it's all about big data these days and nobody needs CAP?
From "Rediscovering Transaction Processing from History and First Principles" https://news.ycombinator.com/item?id=41064634 :
> metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)
> Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr
TB/s in query processing of data already in RAM?
/? TB/s "hnlog"
- https://news.ycombinator.com/item?id=40423020 , [...] :
> The HBM3E Wikipedia article says 1.2TB/s.
> Latest PCIe 7 x16 says 512 GB/s:
fiber optics: 301 TB/s (2024-05)
Cerebras: https://en.wikipedia.org/wiki/Cerebras :
WSE-2 on-chip SRAM memory bandwidth: 20 PB/s / 220 PB/s
WSE-3: 21 PB/s
HBM > Technology: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Technolo... :
HBM3E: 9.8 Gbit/s, 1229 Gbyte/s (2023)
HBM4: 6.4 Gbit/s, 1638 Gbyte/s (2026)
LPDDR SDRAM > Generations: https://en.wikipedia.org/wiki/LPDDR#Generations :
LPDDR5X: 1,066.63 MB/s (2021)
GDDR7: https://en.m.wikipedia.org/wiki/GDDR7_SDRAM
GDDR7: 32 Gbps/pin - 48 Gbps/pin,[11] and chip capacities up to 64 Gbit, 192 GB/s
List of interface bit rates: https://en.wikipedia.org/wiki/List_of_interface_bit_rates :
PCIe7 x16: 1.936 Tbit/s 242 GB/s (2025)
800GBASE-X: 800 Gbps (2024)
DDR5-8800: 70.4 GB/s
Bit rate > In data communications: https://en.wikipedia.org/wiki/Bit_rate#In_data_communications ; Gross and Net bit rate, Information rate, Network throughput, Goodput
Re: TPUs, NPUs, TOPS: https://news.ycombinator.com/item?id=42318274 :
> How many TOPS/W and TFLOPS/W? (T [Float] Operations Per Second per Watt (hour ?))*
Top 500 > Green 500: https://www.top500.org/lists/green500/2024/11/ :
PFlop/s (Rmax)
Power (kW)
GFlops/watts (Energy Efficiency)
Performance per watt > FLOPS/watts: https://en.wikipedia.org/wiki/Performance_per_watt#FLOPS_per...
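The Green500 efficiency column above derives directly from the other two; a unit-conversion sketch with an illustrative system, not a real list entry:

```python
def gflops_per_watt(rmax_pflops, power_kw):
    """Green500-style energy efficiency: PFlop/s and kW -> GFlops/W."""
    # 1 PFlop/s = 1e6 GFlop/s; 1 kW = 1e3 W.
    return (rmax_pflops * 1e6) / (power_kw * 1e3)

# Illustrative system: 100 PFlop/s Rmax drawing 2 MW.
print(gflops_per_watt(100.0, 2000.0))  # 50.0
```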
Electrons: 50%–99% of c, the speed of light ( Speed of electricity: https://en.wikipedia.org/wiki/Speed_of_electricity , Velocity factor of a CAT-7 cable: https://en.wikipedia.org/wiki/Velocity_factor#Typical_veloci... )
Photons: c (*)
Gravitational Waves: Even though both light and gravitational waves were generated by this event, and they both travel at the same speed, the gravitational waves stopped arriving 1.7 seconds before the first light was seen ( https://bigthink.com/starts-with-a-bang/light-gravitational-... )
But people don't do computation with gravitational waves.
To a reasonable rounding error.. yes
How would you recommend that appends to Parquet files be distributedly synchronized with zero trust?
Raft, Paxos, BFT, ... /? hnlog paxos ... this about "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506
To have consensus about protocol revisions; To have data integrity and consensus about the merged sequence of data in database {rows, documents, named graphs, records,}.
"We're building a new static type checker for Python"
Could it please also do runtime type checking?
(PyContracts and iContract do runtime type checking, but it's not very performant.)
That MyPy isn't usable at runtime causes lots of re-work.
Have you tried beartype? It's worked well for me and has the least overhead of any other runtime type checker.
I think TypeGuard (https://github.com/agronholm/typeguard) also does runtime type checking. I use beartype BTW.
pycontracts: https://github.com/AlexandruBurlacu/pycontracts
icontract: https://github.com/Parquery/icontract
The DbC (Design-by-Contract) patterns supported by icontract probably have code-quality returns beyond just saving work.
Safety critical coding guidelines specify that there must be runtime type and value checks at the top of every function.
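A stdlib-only sketch of what such a runtime check looks like as a decorator; beartype and typeguard do this far faster and handle generics, while this toy version only checks plain classes:

```python
import functools
import inspect
import typing

def runtime_checked(fn):
    """Check annotated argument and return types at call time.

    A minimal sketch: only handles plain classes, not generics
    like list[int] (which beartype/typeguard support).
    """
    hints = typing.get_type_hints(fn)
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, got {type(value).__name__}")
        result = fn(*args, **kwargs)
        ret = hints.get("return")
        if isinstance(ret, type) and not isinstance(result, ret):
            raise TypeError(f"return must be {ret.__name__}")
        return result
    return wrapper

@runtime_checked
def scale(x: float, factor: int) -> float:
    return x * factor

print(scale(2.0, 3))  # 6.0; scale("oops", 3) would raise TypeError
```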
Gravitational Communication: Fundamentals, State-of-the-Art and Future Vision
"Communicating with Gravitational Waves" https://www.universetoday.com/170685/communicating-with-grav... :
> What’s promising about gravitational wave communication (GWC) is that it could overcome these challenges. GWC is robust in extreme environments and loses minimal energy over extremely long distances. It also overcomes problems that plague electromagnetic communication (EMC), like diffusion, distortion, and reflection. There’s also the intriguing possibility of harnessing naturally created GWs, which means reducing the energy needed to create them.
Like backscatter with gravitational waves?
Re Gravitational Wave detectors: https://news.ycombinator.com/item?id=41632710 :
>> Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform/or and other methods?
ScholarlyArticle: "Gravitational Communication: Fundamentals, State-of-the-Art and Future Vision" (2024) https://arxiv.org/abs/2501.03251
Tapping into the natural aromatic potential of microbial lignin valorization
Transparent wood composite > Delignification process: https://en.wikipedia.org/wiki/Transparent_wood_composite#Del... :
> The production of transparent wood from the delignification process vary study by study. However, the basics behind it are as follows: a wood sample is drenched in heated (80 °C–100 °C) solutions containing sodium chloride, sodium hypochlorite, or sodium hydroxide/sulfite for about 3–12 hours followed by immersion in boiling hydrogen peroxide.[15] Then, the lignin is separated from the cellulose and hemicellulose structure, turning the wood white and allowing the resin penetration to start. Finally, the sample is immersed in a matching resin, usually PMMA, under high temperatures (85 °C) and a vacuum for 12 hours.[15] This process fills the space previously occupied by the lignin and the open wood cellular structure resulting in the final transparent wood composite.
How could transparent wood production methods be sustainably scaled?
Is the lignin extracted in transparent wood production usable for valorization?
"Lignin valorization: Status, challenges and opportunities" (2022) https://www.sciencedirect.com/science/article/abs/pii/S09608... :
> Most of the research on lignin valorization has been done on lignin from pulp and paper industries (Bruijnincx et al., 2015, Reshmy et al., 2022) The advantage of using lignin from those facilities is that the resource is already centralized and the transportation costs to further process are significantly less
Freezing CPU intensive background tabs in Chrome
How could browsers let app developers know that their app requires excessive resources?
From https://news.ycombinator.com/item?id=40861851 :
>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU,
X/Freedesktop.org Encounters New Cloud Crisis: Needs New Infrastructure
> Equinix had been sponsoring three AMD EPYC 7402P servers and another three dual Intel Xeon Silver 4214 servers for running the FreeDesktop.org GitLab cluster. Plus for GitLab runners there are three AMD EPYC 7502P servers and two Ampere Altra 80-core servers.
> The FreeDesktop.org GitLab burns through around 50TB of bandwidth per month.
Is there an estimate of the monthly and annual costs?
List of display servers: https://en.wikipedia.org/wiki/List_of_display_servers
Who's profiting from X/Freedesktop.org?
(Screenkey, xrandr, and xgamma don't work on Wayland)
Sommelier is Wayland based.
XQuartz hasn't had a release since 2023 and doesn't yet support HiDPI (so X apps on Mac are line-doubled). Shouldn't they be merging fixes?
DOT rips up US fuel efficiency regulations [pdf]
What is the AQI where they live?
And a pipeline through my f heart.
From https://en.wikipedia.org/wiki/United_States_offshore_drillin... :
> In 2018, a new federal initiative to expand offshore drilling suddenly excluded Florida, but although this would be favored by Floridians, concerns remained about the basis for that apparently arbitrary exception being merely politically motivated and tentative. No scientific, military, or economic basis for the decision was given, provoking continuing public concern in Florida.[11]
Why not Florida?
> In 2023, President Biden signed a Memorandum of March 13, 2023 prohibiting oil and gas leasing in certain arctic areas of the Outer Continental Shelf (Withdrawal of Certain Areas off the United States Arctic Coast of the Outer Continental Shelf from Oil or Gas Leasing). However, in January 2025
From https://coast.noaa.gov/states/fast-facts/economics-and-demog... :
> Coastal counties of the U.S. are home to 129 million people, or almost 40 percent of the nation's total population
Are we going to protect other states from this, too?
Servers and data centers could use 30% less energy with a simple Linux update
"Kernel vs. User-Level Networking: Don't Throw Out the Stack with the Interrupts" (2022) https://dl.acm.org/doi/abs/10.1145/3626780 :
> Abstract: This paper reviews the performance characteristics of network stack processing for communication-heavy server applications. Recent literature often describes kernel-bypass and user-level networking as a silver bullet to attain substantial performance improvements, but without providing a comprehensive understanding of how exactly these improvements come about. We identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead. While IRQs and their handling have a substantial impact on the effectiveness of the processor pipeline and thereby the overall processing efficiency, their overhead is difficult to measure directly when serving demanding workloads. This paper presents an indirect methodology to assess IRQ overhead by constructing preliminary approaches to reduce the impact of IRQs. While these approaches are not suitable for general deployment, their corresponding performance observations indirectly confirm the conjecture. Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput without compromising tail latency. In case of server applications, such as web servers or Memcached, the resulting performance is comparable to using kernel-bypass and user-level networking when using stacks with similar functionality and flexibility.
Concept cells help your brain abstract information and build memories
skos:Concept RDFS Class: https://www.w3.org/TR/skos-reference/#concepts
schema:Thing: https://schema.org/Thing
atomspace:ConceptNode: https://wiki.opencog.org/w/Atom_types .. https://github.com/opencog/atomspace#examples-documentation-...
SKOS Simple Knowledge Organization System > Concepts, ConceptScheme: https://en.wikipedia.org/wiki/Simple_Knowledge_Organization_...
But temporal instability observed in repeat functional imaging studies indicates that functional localization is not constant: the regions of the brain that activate for a given cue vary over time.
From https://news.ycombinator.com/item?id=42091934 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Future work should characterize drift across brain regions, cell types, and learning.
The important part about the statements in the drift paper are the qualifiers:
> Cells whose activity was previously correlated with environmental and behavioral variables are most frequently no longer active in response to the same variables weeks later. At the same time, a mostly new pool of neurons develops activity patterns correlated with these variables.
“Most frequently” and “mostly new”: this means that some neurons still fire across the weeks-long periods for the same activities, leaving plenty of potential space for concept cells.
This doesn’t necessarily mean concept cells exist, but it does allow for the possibility of their existence.
I also didn’t check which regions of the brain were evaluated in each concept, as it is likely they have some different characteristics at the neuron level.
So there's more stability in the electrovolt wave function of the brain than in the cellular activation pathways?
I am not sure what an electrovolt is, or why this question follows from what I said? All I said was all neurons don’t seem to switch their activation stimuli, only “many.”
Show HN: Design/build of some parametric speaker cabinets with OpenSCAD
> As the design was fully parametric, I could change a single variable to move to a floorstanding design.
How far are we in 2025 from defining a Differentiable design, setting a target (e.g. maximally flat frequency response for a certain speaker placement in a certain room) and solving this automatically?
A basic speaker can't do a lot to improve "room acoustics" [1], particularly below the Schroeder frequency, where room modes greatly affect the bass response. From my experience, it's the bass that needs fixing in-room, because even a "perfect speaker" will get boomy/muddy at some frequencies (i.e. reflections overlapping constructively), and thin/null at others (i.e. reflections overlapping destructively). And if you aren't going to worry about "fixing the room", then there are already companies/products like Kali Audio IN-5 speakers [2] that have squeezed some good performance and tech (e.g. active DSP with a coaxial driver) into an affordable package.
There are ways to improve the bass situation, and one is by implementing "directional bass", aka a cardioid array. This could be achieved with multiple individual speakers (an add-on/upgrade to an existing system), or could be integrated into one "speaker". Examples of the latter have been around for a few years - see Dutch & Dutch 8c or Kii Three. They are relatively expensive (which you could consider an early-adopter tax), but affordable competition is starting to mature with speakers such as the Mesanovic CDM65 [3].
There is another way to improve things too, and that is via "Active Room Treatment" [4], as Dirac calls it. Basically it uses excess capability in various speakers to "clean up" the audio of the other speakers in the system by outputting "cancellation waves" (to cancel the problems). The results appear amazing, but they are taking their sweet time getting it released onto affordable equipment.
There's also "spatial audio" like Dolby Atmos that should/could work around room problems in a similar way to Dirac ART. So good speakers (like those that already exist) + ART + Atmos + AI-"upscaled" 2-channel source music could be the final frontier? But that's just for "mechanical" sound reproduction. Maybe in the future I can just transmit the song straight into my brain, bypassing my ears and the need for speakers entirely?!
[1] https://en.wikipedia.org/wiki/Room_acoustics [2] https://www.kaliaudio.com/independence [3] https://www.audiosciencereview.com/forum/index.php?threads/m... [4] https://www.dirac.com/live/dirac-live-active-room-treatment/
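The bass problems described above can be located numerically with two standard formulas: axial room modes f_n = n·c/(2L) and the Schroeder frequency f_s ≈ 2000·sqrt(RT60/V). A sketch with illustrative room dimensions:

```python
import math

C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, n_max=3):
    """Axial standing-wave frequencies f_n = n * c / (2L) for one dimension."""
    return [n * C / (2 * length_m) for n in range(1, n_max + 1)]

def schroeder_frequency(rt60_s, volume_m3):
    """f_s ~ 2000 * sqrt(RT60 / V); below this, discrete room modes dominate."""
    return 2000 * math.sqrt(rt60_s / volume_m3)

# Illustrative 5 m x 4 m x 2.5 m room with RT60 = 0.5 s:
vol = 5 * 4 * 2.5
print([round(f, 1) for f in axial_modes(5.0)])  # [34.3, 68.6, 102.9]
print(round(schroeder_frequency(0.5, vol), 1))  # 200.0
```

Everything below that ~200 Hz line is where cardioid arrays, multi-sub setups, or DSP like Dirac ART earn their keep.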
If you want to DIY something similar, check out the LXmin and other Linkwitz designs
In this video, they explain that the bandpass port that lets air out of (typically) the back of a speaker is essential for acoustic transmission, including bass; they also built an anechoic chamber.
"World's Second Best Speakers!" https://youtube.com/watch?v=EEh01PX-q9I&
Tech Ingredients also has a video about attaching $6 bass kickers to XPS foam to make flat-panel speakers about as good as expensive bookshelf speakers.
"World’s Best Speakers!" https://youtube.com/watch?v=CKIye4RZ-5k&
Tool touted as 'first AI software engineer' is bad at its job, testers claim
Just blogspam rehash of https://www.answer.ai/posts/2025-01-08-devin.html#what-is-de...
[Multi-] SWE-bench Leaderboards:
SWE-bench: https://www.swebench.com/ :
> SWE-bench is a dataset that tests systems' ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
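The evaluation described above reduces to: an instance is "resolved" iff the repo's unit tests pass after applying the model's patch. A minimal sketch of the resolve-rate aggregation; the result records are stand-ins, not a real harness:

```python
# Minimal sketch of SWE-bench-style scoring: an instance counts as
# "resolved" iff post-patch unit tests pass. Instance IDs illustrative.

def resolve_rate(results):
    """results: list of dicts like {"instance": ..., "tests_passed": bool}."""
    if not results:
        return 0.0
    resolved = sum(1 for r in results if r["tests_passed"])
    return resolved / len(results)

results = [
    {"instance": "django__django-11099", "tests_passed": True},
    {"instance": "sympy__sympy-18621",   "tests_passed": False},
    {"instance": "flask__flask-4045",    "tests_passed": True},
]
print(resolve_rate(results))  # 0.666...
```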
Multi-SWE-bench: A Multi-Lingual and Multi-Modal GitHub Issue Resolving Benchmark: https://multi-swe-bench.github.io/
Show HN: Stratoshark, a sibling application to Wireshark
Hi all, I'm excited to announce Stratoshark, a sibling application to Wireshark that lets you capture and analyze process activity (system calls) and log messages in the same way that Wireshark lets you capture and analyze network packets. If you would like to try it out you can download installers for Windows and macOS and source code for all platforms at https://stratoshark.org.
AMA: I'm the goofball whose name is at the top of the "About" box in both applications, and I'll be happy to answer any questions you might have.
Re: custom fields in pcap traces and retis https://github.com/retis-org/retis
MIT Unveils New Robot Insect, Paving the Way Toward Rise of Robotic Pollinators
Is there robo-beekeeping with or without humanoid robots?
/? Robo beekeeping: https://www.google.com/search?q=robo+beekeeping
Beekeeping: https://en.wikipedia.org/wiki/Beekeeping
FWIU bees like clover (and dandelions are their spring food source), which we typically kill with broadleaf herbicide for lawncare.
From https://news.ycombinator.com/item?id=38158625 :
> Is it possible to create a lawn weed killer (a broadleaf herbicide) that doesn't kill white dutch clover; because bees eat clover (and dandelions) and bees are essential?"
> [ Dandelion rubber is a sustainable alternative to microplastic tires ]
Pesticide toxicity to bees: https://en.wikipedia.org/wiki/Pesticide_toxicity_to_bees :
> Pesticides, especially neonicotinoids, have been investigated in relation to risks for bees such as Colony Collapse Disorder. A 2018 review by the European Food Safety Authority (EFSA) concluded that most uses of neonicotinoid pesticides such as clothianidin represent a risk to wild bees and honeybees. [5][6] Neonicotinoids have been banned for all outdoor use in the entire European Union since 2018, but has a conditional approval in the U.S. and other parts of the world, where it is widely used. [7][8]
TIL dish soap kills wasps, yellow jackets, hornets nearly on contact.
From https://savethebee.org/garden-weeds-bees-love/ :
> Many are beneficial, like dandelions, milkweed, clover, goldenrod and nettle, for bees and other pollinators.
Show HN: Pytest-evals – Simple LLM apps evaluation using pytest
The pytest-evals README mentions that it's built on pytest-harvest, which works with pytest-xdist and pytest-asyncio.
pytest-harvest: https://smarie.github.io/python-pytest-harvest/ :
> Store data created during your pytest tests execution, and retrieve it at the end of the session, e.g. for applicative benchmarking purposes
Yeah, pytest-harvest is a pretty cool plugin.
Originally I had a (very large and unfriendly) conftest file, but it was quite challenging to collaborate on with other team members and was quite repetitive. So I wrapped it as a plugin, added some more functionality, and that's it.
This plugin wraps some boilerplate code in a way that is easy to use, especially for the eval use-case. It’s minimalistic by design; nothing big or fancy.
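The "store per-test results, aggregate at session end" pattern that pytest-harvest provides can be sketched in plain Python; the function names and the exact-match scoring below are illustrative stand-ins, not the pytest-evals or pytest-harvest API:

```python
# Minimal sketch of the "results bag" pattern: each eval case records its
# metrics on a shared bag (pytest-harvest's results_bag fixture plays this
# role), and a session-level step aggregates them afterwards.

results_bag = {}

def eval_case(case_id, prediction, expected):
    """Score one LLM output and record it, as an eval test might."""
    score = 1.0 if prediction.strip().lower() == expected.strip().lower() else 0.0
    results_bag[case_id] = {"prediction": prediction, "score": score}
    return score

def summarize(bag):
    """Aggregate stored results at session end (pytest-harvest's role)."""
    scores = [r["score"] for r in bag.values()]
    return sum(scores) / len(scores) if scores else 0.0

eval_case("greeting", "Hello", "hello")
eval_case("math", "5", "4")
accuracy = summarize(results_bag)
print(f"accuracy: {accuracy:.2f}")  # → accuracy: 0.50
```

In real pytest-harvest, `results_bag` is a per-test fixture and the aggregation happens via session-scoped fixtures or dataframe helpers rather than a module-level dict.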
Laser technique measures distances with nanometre precision
> optical frequency comb
"113 km absolute ranging with nanometer precision" (2024) https://arxiv.org/abs/2412.05542 :
> two-way dual-comb ranging (TWDCR) approach
> The advanced long-distance ranging technology is expected to have immediate implications for space research initiatives, such as the space telescope array and the satellite gravimetry
Note that the precision is very good, however the accuracy is nowhere near as close (fractions of a metre) in the atmosphere, due to the variable refractive index of air. Long term averages can help, of course.
They talk about this technique for ranging between satellites, which wouldn't have to deal with atmospheric conditions.
For ranging in an atmosphere they suggest dual-comb spectroscopy or two-color methods to account for the atmospheric changes.
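The precision-vs-accuracy gap above can be put in rough numbers; the refractive-index figure below is an illustrative assumption, not from the paper:

```python
# Back-of-envelope: why nanometre *precision* doesn't imply nanometre
# *accuracy* in air. Air's group index (~1.00027) swings at roughly the
# ppm level with temperature and pressure; even a well-corrected residual
# uncertainty dominates the error budget over a long path.

L = 113e3             # path length in metres (the 113 km experiment)
dn = 1e-7             # assumed residual refractive-index uncertainty

range_error = L * dn  # apparent path-length error from the index uncertainty
print(f"index-driven error over 113 km: {range_error * 1e3:.1f} mm")
```

Even a 1e-7 residual index error yields millimetre-scale ranging error, ten million times the claimed precision, which is why the two-color and dual-comb spectroscopy corrections matter in-atmosphere.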
From https://news.ycombinator.com/item?id=42330179 .. https://www.notebookcheck.net/University-of-Tokyo-researcher... :
> multispectral [UV + IR] camera ... optical, non-contact method to detect [blood pressure, hypertension, blood glucose levels, and diabetes]." [ for "Non-Contact Biometric System for Early Detection of Hypertension and Diabetes" https://www.ahajournals.org/doi/10.1161/circ.150.suppl_1.413... ]
>> "Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
>>> new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). [ and XGboost ]
Show HN: Terraform Provider for Inexpensive Switches
Hi HN,
I’ve been building this provider for (web-managed) network switches manufactured by HRUI. These switches are often used in SMBs, home labs, and by budget-conscious enthusiasts. Many HRUI switches are also rebranded and sold under various OEM/ODM names (e.g. Horaco, XikeStor, keepLiNK, Sodola), making them accessible and popular but often overlooked in the world of infrastructure automation.
The provider is in pre-release, and I’m looking for owners of these switches to test it and share feedback. My goal is to make it easier to automate their configuration using Terraform/OpenTofu :)
You can use this provider to configure VLANs, port settings, trunk/link aggregation etc.
I built this provider to address the lack of automation tools for budget-friendly hardware. It leverages goquery and has an internal SDK sitting between the Terraform resources and the switch Web UI.
If you have one of these switches, I’d love for you to give it a try and let me know how it works for you!
Terraform Registry: https://registry.terraform.io/providers/brennoo/hrui
OpenTofu Provider: https://search.opentofu.org/provider/brennoo/hrui
I’m happy to answer any questions about the provider or the hardware it supports. Feedback, bug reports, and ideas for improvement are more than welcome!
Does anyone know of switches similar to these that might be loadable with Linux? Maybe able to run switchdev or similar?
OpenWRT > Table of Hardware > Switches: https://openwrt.org/toh/views/switches
ansible-openwrt: https://github.com/gekmihesg/ansible-openwrt
/? terraform OpenWRT: https://www.google.com/search?q=terraform+openwrt
/? terraform Open vSwitch: https://www.google.com/search?q=open+vswitch+terraform
Open vSwitch supports OpenFlow: https://en.wikipedia.org/wiki/Open_vSwitch
Open vSwitch > "Porting Open vSwitch to New Software or Hardware" https://docs.openvswitch.org/en/latest/topics/porting/
Optimizing Jupyter Notebooks for LLMs
Jupyter + LLM tools: Ipython-GPT, Elyra, jetmlgpt, jupyter-ai; CoCalc, Colab, NotebookLM
jupyterlab/jupyter-ai: https://github.com/jupyterlab/jupyter-ai
"[jupyter/enhancement-proposals#128] Pre-proposal: standardize object representations for ai and a protocol to retrieve them" https://github.com/jupyter/enhancement-proposals/issues/128
Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?
How can new and existing firefighting aircraft, land craft, seacraft, and other capabilities help fight fire; in California at present and against the new climate enemy?
# Next-Gen Firefighting Aircraft (working title: NGFFA)
## Roadmap
- [ ] Brainstorm challenges and solutions; new sustainable approaches and time-tested methods
- [ ] Write a grant spec
- [ ] Write a legislative program funding request
## Challenges
- Challenge: Safety; Risk to personnel and civilians
- Challenge: Diving into fire is dangerous for the human aircraft operator
- Challenge: Vorticity due to fire heat; fire CFD (Computational Fluid Dynamics)
-- The vertical updrafts and downdrafts off a fire are dangerous for pilots and for expensive, non-compostable aircraft.
- Challenge: Water drifts away before it hits the ground due to the release altitude, wind, and fire air currents
- Challenge: Water shortage
- Challenge: Water lift aircraft shortage
- Task: Pour thousands of gallons (or Canadian litres) of water onto fire
- Task: Pour a line, a circle, or other patterns to contain fire
- Task: Process seawater quickly enough to avoid the property damage caused by dropping untreated ocean salt water, for example
https://www.google.com/search?q=fluid+vorticity+fire+plane
## Solutions
- Are Quadcopters (or n-copters) more stable than helicopters or planes in high-wind fire conditions?
- Fire CFD modeling for craft design.
- Light, durable aerospace-grade hydrogen vessels
- Intermodally transportable containers
- Are there stackable container-sized water vessels for freight transport?
- Floating, towable seawater processing and loading capability.
-- Process seawater for: domestic firefighting, disaster relief,
-- Fill vessels at sea.
- "Starlite" as a flame retardant; https://en.wikipedia.org/wiki/Starlite
- Starlite YouTuber replication, FWIU: [ cornstarch, flour, sugar, borax ] https://en.wikipedia.org/wiki/Starlite#Replication
- Notes re: "xPrize Wildfire – $11M Prize Competition" (2023) https://news.ycombinator.com/item?id=35658214
- Hydrogels, Aerogels
- (Hemp) Aerogels absorb oil better than treated polyurethane foam and hair.
- EV fire blanket, industry cartridge spec
- Non-flammable batteries; unpressurized Sodium Ion, Proton batteries, Twisted Carbon Nanotube batteries
- Compostable batteries
- Cloud seeding and firefighting
## Reference material
- Aerial firefighting: https://en.wikipedia.org/wiki/Aerial_firefighting
- Aerial firefighting > Comparison table of fixed-wing, firefighting tanker airplanes: https://en.wikipedia.org/wiki/Aerial_firefighting#Comparison...
Imaging Group and Phase Velocities of THz Surface Plasmon Polaritons in Graphene
ScholarlyArticle: "Spacetime Imaging of Group and Phase Velocities of Terahertz Surface Plasmon Polaritons in Graphene" (2024) https://pubs.acs.org/doi/10.1021/acs.nanolett.4c04615
NewsArticle: "Scientists observe and control ultrafast surface waves on graphene" (2024) https://phys.org/news/2025-01-scientists-ultrafast-surface-g...
Reversible computing escapes the lab
Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.
Reversibility isn't actually necessary for most of the energy savings. It saves you an extra maybe 20% beyond what adiabatic techniques can do on their own. Reason being, the energy of the information itself pales in comparison to the resistive losses which dominate the losses in adiabatic circuits, and it's actually a (device-dependent) portion of these resistive losses which the reversible aspect helps to recover, not the energy of information itself.
I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)
I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.
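The scaling argument in the comment above (adiabatic dissipation per operation ~ (RC/T)·CV², vs. ~CV² for conventional CMOS) can be sketched numerically; the R, C, and V values are illustrative assumptions, not figures from any real process:

```python
# Conventional CMOS dissipates ~CV^2 per switching event; an adiabatic
# ramp over time T dissipates ~(RC/T) * CV^2, so slowing the clock
# directly buys energy per operation.

C = 1e-15   # node capacitance, farads (assumed)
R = 1e5     # effective channel resistance, ohms (assumed; gives fRC ~ 0.1 at 1 GHz)
V = 1.0     # swing voltage, volts

def e_conventional(C, V):
    return C * V**2

def e_adiabatic(C, R, V, f):
    T = 1.0 / f                   # ramp time tied to the clock period
    return (R * C / T) * C * V**2

for f in (1e9, 1e8, 1e7):
    saving = e_conventional(C, V) / e_adiabatic(C, R, V, f)
    print(f"f = {f:.0e} Hz: adiabatic saves {saving:.0f}x")
```

With these assumed values the savings are ~10x at full speed and grow linearly as the clock slows, matching the comment's point that adiabatic circuits benefit disproportionately from frequency scaling.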
Hi, someone pointed me at your comment, so I thought I'd reply.
First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, then now that's 99%. But, if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below - but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
https://www.sandia.gov/app/uploads/sites/210/2023/11/Comet23... https://www.youtube.com/watch?v=vALCJJs9Dtw
Happy to answer any questions.
Thanks for the reply, was actually hoping you'd pop over here.
I don't think we actually disagree on anything. Yes, without reverse circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, which is only a fraction of the switching cycle. At least for PFAL the figure is somewhere in the 20% range IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)
The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)
What I find interesting is reversibility isn't actually necessary for true adiabatic operation. All that matters is the information of where charge needs to be recovered from can be derived somehow. This could come from information available elsewhere in the circuit, not necessarily the subsequent computations reversed. (Thankfully, quantum non-duplication does not apply here!)
I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.
Ah, thanks for the insight into the resonator/switched-cap tradeoff. Yes, I know capacitive switching designs which are themselves adiabatic are a bit of a research topic. In my experience the losses aren't comparable to the resistive losses of the adiabatic circuitry itself though. (I've done SPICE simulations using the sky130 process.)
It's been a while since I looked at it, but I believe PFAL is one of the not-fully-adiabatic techniques that I have a lot of critiques of.
There have been studies showing that a truly, fully adiabatic technique in the sense I'm talking about (2LAL was the one they checked) does about 10x better than any of the other "adiabatic" techniques. In particular, 2LAL does a lot better than PFAL.
> reversibility isn't actually necessary
That isn't true in the sense of "reversible" that I use. Look at the structure of the word -- reverse-able. Able to be reversed. It isn't essential that the very same computation that computed some given data is actually applied in reverse, only that no information is obliviously discarded, implying that the computation always could be reversed. Unwanted information still needs to be decomputed, but in general, it's quite possible to de-compute garbage data using a different process than the reverse of the process that computed it. In fact, this is frequently done in practice in typical pipelined reversible logic styles. But they still count as reversible even though the forwards and reverse computations aren't identical. So, I think we agree here and it's just a question of terminology.
Lower bounds on clock speed are indeed important; generally this arises in the form of maximum latency constraints. Fortunately, many workloads today (such as AI) are limited more by bandwidth/throughput than by latency.
I'd be interested to know if you can get energy savings factors on the order of 100x or 1000x with the capacitive switching techniques you're looking at. So far, I haven't seen that that's possible. Of course, we have a long way to go to prove out those kinds of numbers in practice using resonant charge transfer as well. Cheers...
PFAL has both a fully adiabatic and quasi-adiabatic configuration. (Essentially, the "reverse" half of a PFAL gate can just be tied to the outputs for quasi-adiabatic mode.) I've focused my own research on PFAL because it is (to my knowledge) one of the few fully adiabatic families, and of those, I found it easy to understand.
I'll have to check out 2LAL. I haven't heard of it before.
No, even with a fully adiabatic switched-capacitance driver I don't think those figures are possible. The maximum efficiency I believe is 1-1/n, n being the number of steps (and requiring n-1 capacitors). But the capacitors themselves must each be an order of magnitude larger than the adiabatic circuit itself. So it's a reasonable performance match for an adiabatic circuit running at "max" frequency, with e.g. 8 steps/7 capacitors, but 100x power reduction necessary to match a "slowed" adiabatic circuit would require 99 capacitors... which quickly becomes infeasible!
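The capacitor-count argument above, made explicit (using the comment's own 1-1/n efficiency model, with n steps needing n-1 capacitors):

```python
# An n-step switched-capacitor (stepped-voltage) driver has efficiency
# ~1 - 1/n, i.e. a loss fraction of 1/n, and needs n-1 capacitors.
# So an x-fold loss reduction needs x steps, hence x-1 capacitors.

def capacitors_needed(loss_reduction):
    n = loss_reduction   # loss fraction 1/n => n steps for n-fold reduction
    return n - 1

for target in (8, 100):
    print(f"{target}x loss reduction -> {capacitors_needed(target)} capacitors")
```

This is why the capacitor count scales linearly with the desired loss reduction, making 100x infeasible for a switched design while a resonant power-clock can, in principle, keep going.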
Yeah, 2LAL (and its successor S2LAL) uses a very strict switching discipline to achieve truly, fully adiabatic switching. I haven't studied PFAL carefully but I doubt it's as good as 2LAL even in its more-adiabatic version.
For a relatively up-to-date tutorial on what we believe is the "right" way to do adiabatic logic (i.e., capable of far more efficiency than competing adiabatic logic families from other research groups), see the below talk which I gave at UTK in 2021. We really do find in our simulations that we can achieve 4 or more orders of magnitude of energy savings in our logic compared to conventional, given ideal waveforms and power-clock delivery. (But of course, the whole challenge in actually getting close to that in practice is doing the resonant energy recovery efficiently enough.)
https://www.sandia.gov/app/uploads/sites/210/2022/06/UKy-tal... https://tinyurl.com/Frank-UKy-2021
The simulation results were first presented (in an invited talk to the SRC Decadal Plan committee) a little later that year in this talk (no video of that one, unfortunately):
https://www.sandia.gov/app/uploads/sites/210/2022/06/SRC-tal...
However, the ComET talk I linked earlier in the thread does review that result also, and has video.
How do the efficiency gains compare to speedups from photonic computing, superconductive computing, and maybe fractional Quantum Hall effect at room temperature computing? Given rough or stated production timelines, for how long will investments in reversible computing justify the relative returns?
Also, FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
"The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123 ... https://www.sciencedaily.com/releases/2011/06/110601134300.h... ;
> Abstract: ... Here we show that the standard formulation and implications of Landauer’s principle are no longer valid in the presence of quantum information. Our main result is that the work cost of erasure is determined by the entropy of the system, conditioned on the quantum information an observer has about it. In other words, the more an observer knows about the system, the less it costs to erase it. This result gives a direct thermodynamic significance to conditional entropies, originally introduced in information theory. Furthermore, it provides new bounds on the heat generation of computations: because conditional entropies can become negative in the quantum case, an observer who is strongly correlated with a system may gain work while erasing it, thereby cooling the environment.
I have concerns about density & cost for both photonic & superconductive computing. Not sure what one can do with quantum Hall effect.
Regarding long-term returns, my view is that reversible computing is really the only way forward for continuing to radically improve the energy efficiency of digital compute, whereas conventional (non-reversible) digital tech will plateau within about a decade. Because of this, within two decades, nearly all digital compute will need to be reversible.
Regarding bypassing the Landauer limit, theoretically yes, reversible computing can do this, but not by thermally cooling anything really, but rather by avoiding the conversion of known bits to entropy (and their energy to heat) in the first place. This must be done by "decomputing" the known bits, which is a fundamentally different process from just erasing them obliviously (without reference to the known value).
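A toy classical illustration of "decomputing" a known bit versus obliviously erasing it: XOR-ing a known value into a scratch bit (a controlled-NOT) is its own inverse, so the scratch bit can be returned to 0 reversibly, with no information discarded.

```python
# Reversible controlled-NOT on classical bits: applying it twice with the
# same control restores the target, so garbage can be "decomputed" back
# to 0 rather than erased.

def cnot(control, target):
    return control, target ^ control

x, scratch = 1, 0
x, scratch = cnot(x, scratch)   # compute: scratch now holds a copy of x
# ... use scratch ...
x, scratch = cnot(x, scratch)   # decompute: scratch returns to 0, reversibly
print(x, scratch)               # → 1 0
```

Oblivious erasure would instead force `scratch` to 0 regardless of its value, which is exactly the step Landauer's principle charges for.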
For the quantum case, I haven't closely studied the result in the second paper you cited, but it sounds possible.
/? How can fractional quantum hall effect be used for quantum computing https://www.google.com/search?q=How+can+a+fractional+quantum...
> Non-Abelian Anyons, Majorana Fermions are their own antiparticles, Topologically protected entanglement
> In some FQHE states, quasiparticles exhibit non-Abelian statistics, meaning that the order in which they are braided affects the final quantum state. This property can be used to perform universal quantum computation
Anyon > Abelian, Non Abelian Anyons, Toffoli (CCNOT gate) https://en.wikipedia.org/wiki/Anyon#Abelian_anyons
Hopefully there's a classical analogue of a quantum delete operation that cools the computer.
There's no resistance for electrons in superconductors, so there's far less waste heat. But - other than recent advances with rhombohedral trilayer graphene and pentalayer graphene (which isn't really "graphene") - superconductivity requires super-chilling which is too expensive and inefficient.
Photons are not subject to the Landauer limit and are faster than electrons.
In the Standard Model of particle physics, photons are bosons, and electrons are leptons, which are fermions.
Electrons behave like fluids in superconductors.
Photons behave like fluids in superfluids (Bose-Einstein condensates) which are more common in space.
And now they're saying there's a particle that only has mass when moving in certain directions; a semi-Dirac fermion: https://en.wikipedia.org/wiki/Semi-Dirac_fermion
> Because of this, within two decades, nearly all digital compute will need to be reversible.
Reversible computing: https://en.wikipedia.org/wiki/Reversible_computing
Reverse computation: https://en.wikipedia.org/wiki/Reverse_computation
Time crystals demonstrate retrocausality.
Is Hawking radiation from a black hole or from all things reversible?
What are the possible efficiency gains?
Customasm – An assembler for custom, user-defined instruction sets
Is there an ISA for WASM that's faster than RISC-V 64, which is currently 3x faster than x86_64 on x86_64 FWICS? https://github.com/ktock/container2wasm#emscripten-on-browse... demo: https://ktock.github.io/container2wasm-demo/
Show HN: WASM-powered codespaces for Python notebooks on GitHub
Hi HN!
Last year, we shared marimo [1], an open-source reactive notebook for Python with support for execution through WebAssembly [2].
We wanted to share something new: you can now run marimo and Jupyter notebooks directly from GitHub in a Wasm-powered, codespace-like environment. What makes this powerful is that we mount the GitHub repository's contents as a filesystem in the notebook, making it really easy to share notebooks with data.
All you need to do is prepend 'marimo.app' to any Python notebook on GitHub. Some examples:
- Jupyter Notebook: https://marimo.app/github.com/jakevdp/PythonDataScienceHandb...
- marimo notebook: https://marimo.app/github.com/marimo-team/marimo/blob/07e8d1...
Jupyter notebooks are automatically converted into marimo notebooks using basic static analysis and source code transformations. Our conversion logic assumes the notebook was meant to be run top-down, which is usually but not always true [3]. It can convert many notebooks, but there are still some edge cases.
We implemented the filesystem mount using our own FUSE-like adapter that links the GitHub repository’s contents to the Python filesystem, leveraging Emscripten’s filesystem API. The file tree is loaded on startup to avoid waterfall requests when reading many directories deep, but loading the file contents is lazy. For example, when you write Python that looks like
```python
with open("./data/cars.csv") as f:
    print(f.read())

# or

import pandas as pd
pd.read_csv("./data/cars.csv")
```
behind the scenes, you make a request [4] to https://raw.githubusercontent.com/<org>/<repo>/main/data/car....
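The eager-tree / lazy-content pattern described above can be sketched in a few lines; the class and the fetch function are hypothetical stand-ins, not marimo's actual adapter:

```python
# Minimal sketch: the file *listing* is loaded eagerly at startup (avoiding
# waterfall requests), but file *contents* are fetched only on first read
# and then cached — the role the raw.githubusercontent.com request plays.

class LazyRepoFS:
    def __init__(self, tree, fetch):
        self.tree = set(tree)   # full file listing, loaded eagerly
        self.fetch = fetch      # called only on first read of a path
        self.cache = {}

    def read(self, path):
        if path not in self.tree:
            raise FileNotFoundError(path)
        if path not in self.cache:          # lazy: fetch on first access
            self.cache[path] = self.fetch(path)
        return self.cache[path]

calls = []
def fake_fetch(path):                       # stands in for the HTTP request
    calls.append(path)
    return f"contents of {path}"

fs = LazyRepoFS(["data/cars.csv", "README.md"], fake_fetch)
fs.read("data/cars.csv")
fs.read("data/cars.csv")                    # cached; no second "network" call
print(len(calls))                           # → 1
```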
Docs: https://docs.marimo.io/guides/publishing/playground/#open-no...
[1] https://github.com/marimo-team/marimo
[2] https://news.ycombinator.com/item?id=39552882
[3] https://blog.jetbrains.com/datalore/2020/12/17/we-downloaded...
[4] We technically proxy it through the playground https://marimo.app to fix CORS issues and GitHub rate-limiting.
> CORS and GitHub
The Godot docs mention coi-serviceworker; https://github.com/orgs/community/discussions/13309 :
gzuidhof/coi-serviceworker: https://github.com/gzuidhof/coi-serviceworker :
> Cross-origin isolation (COOP and COEP) through a service worker for situations in which you can't control the headers (e.g. GH pages)
CF Pages' free unlimited bandwidth and gitops-style deploy might solve for apps that require more than the 100GB software cap of free bandwidth GH has for open source projects.
Thanks for sharing these resources
> [ FUSE to GitHub FS ]
> Notebooks created from GitHub links have the entire contents of the repository mounted into the notebook's filesystem. This lets you work with files using regular Python file I/O!
Could BusyBox sh compiled to WASM (maybe on emscripten-forge) work with files on this same filesystem?
"Opening a GitHub remote with vscode.dev requires GitHub login? #237371" ... but it works with Marimo and JupyterLite: https://github.com/microsoft/vscode/issues/237371
Does Marimo support local file system access?
jupyterlab-filesystem-access only works with Chrome?: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
vscode-marimo: https://github.com/marimo-team/vscode-marimo
"Normalize and make Content frontends and backends extensible #315" https://github.com/jupyterlite/jupyterlite/issues/315
"ENH: Pluggable Cloud Storage provider API; git, jupyter/rtc" https://github.com/jupyterlite/jupyterlite/issues/464
Jupyterlite has read only access to GitHub repos without login, but vscode.dev does not.
Anyways, nbreproduce wraps repo2docker and there's also a repo2jupyterlite.
nbreproduce builds a container to run an .ipynb with: https://github.com/econ-ark/nbreproduce
container2wasm wraps vscode-container-wasm: https://github.com/ktock/vscode-container-wasm
container2wasm: https://github.com/ktock/container2wasm
Scientists Discover Underground Water Vault in Oregon 3x the Size of Lake Mead
Would have been better to not admit it was there or have hidden it.
The last thing we need is more clean water reserves sucked dry by Intel rather than Intel securing its own supplies through other means, or used as an excuse for Southern California not to continue its efforts (admirable in some counties) to get water resource management under control in concert with other states in the watersheds.
Resource curse.
TIL semiconductor manufacturing uses a lot of water, relative to other production processes? And energy, for photolithography.
(Edit: nanoimprint lithography may have significantly lower resource requirements than traditional lithography? https://arstechnica.com/reviews/2024/01/canon-plans-to-disru... : "will be “one digit” cheaper and use up to 90 percent less power" )
Datacenters too;
"Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
Most datacenters have no way to return their boiled, sterilized water for water treatment, so they don't give or sell waste water back; the water carries heat away with it when it evaporates.
From https://news.ycombinator.com/item?id=42454547#42460317 :
> FWIU, datacenters are unable to sell their waste heat, boiled sterilized steam and water, unused diesel, and potentially excess energy storage.
"Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
How did it form?
How does it affect tectonics on the western coast of the US?
Cascade Range > Geology: https://en.wikipedia.org/wiki/Cascade_Range#Geology
https://www.sci.news/othersciences/geophysics/hikurangi-wate... :
> Revealed by 3D seismic imaging, the newly-discovered [Hikurangi] water reservoir lies 3.2 km (2 miles) under the ocean floor off the coast of New Zealand, where it may be dampening a major earthquake fault that faces the country’s North Island. The fault is known for producing slow-motion earthquakes, called slow slip events. These can release pent-up tectonic pressure harmlessly over days and weeks
The "Slow earthquake" wikipedia article mentions the northern Cascades as a research area of interest: https://en.wikipedia.org/wiki/Slow_earthquake
They say water has a fingerprint; a hydrochemical and/or geochemical fingerprint?
Is the water in the reservoir subducted from the surface or is it oozing out of the Earth?
"Dehydration melting at the top of the lower mantle" (2014) https://www.science.org/doi/abs/10.1126/science.1253358 :
> Summary: [...] Schmandt et al. combined seismological observations beneath North America with geodynamical modeling and high-pressure and -temperature melting experiments. They conclude that the mantle transition zone — 410 to 660 km below Earth's surface — acts as a large reservoir of water.
Physicists who want to ditch dark energy
From https://news.ycombinator.com/item?id=36222625#36265001 :
> Further notes regarding Superfluid Quantum Gravity (instead of dark energy)
A 'warrior' brain surgeon saved his Malibu street from wildfires and looters
> training, N95 masks, sourced fire hoses
> sprinklers in the roof, cement tiles instead of wood
> Dr Griffiths, who is also a doctor to the LA Kings hockey team, said if one thing can come from the devastating tragedy, he wants people to get to know their neighbours.
From "Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665860 :
> Next-Gen Firefighting Aircraft (working title: NGFFA)
Refactoring with Codemods to Automate API Changes
From "Show HN: Codemodder – A new codemod library for Java and Python" (2024) https://news.ycombinator.com/item?id=39111747 :
> [ codemodder-python, libCST, MOSES and Holman's elegant normal form, singnet/asmoses, Formal Verification, ]
How do photons mediate both attraction and repulsion?
Additional recent findings in regards to photons (and attraction and repulsion) from the past few years:
"Scientists discover laser light can cast a shadow" https://news.ycombinator.com/item?id=42231644 :
- "Shadow of a laser beam" (2024) https://opg.optica.org/optica/fulltext.cfm?uri=optica-11-11-...
- "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315
- "Ultrafast opto-magnetic effects in the extreme ultraviolet spectral range" (2024) https://www.nature.com/articles/s42005-024-01686-7 .. https://news.ycombinator.com/item?id=40911861
- 2024: better than the amplituhedron (for avoiding some Feynman diagrams) .. "Physicists Reveal a Quantum Geometry That Exists Outside of Space and Time" (2024) https://www.quantamagazine.org/physicists-reveal-a-quantum-g... :
- "All Loop Scattering As A Counting Problem" (2023) https://arxiv.org/abs/2309.15913
- "All Loop Scattering For All Multiplicity" (2023) https://arxiv.org/abs/2311.09284
And then gravity and photons:
- "Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0...
- "Photonic implementation of quantum gravity simulator" (2024) https://www.spiedigitallibrary.org/journals/advanced-photoni... .. https://news.ycombinator.com/item?id=42506463
- "Graviton to Photon Conversion via Parametric Resonance" (2023) https://arxiv.org/abs/2205.08767 .. "Physicists discover that gravity can create light" (2023) https://news.ycombinator.com/item?id=35633291#35674794
What about the reverse: if gravitons produce photons, can photons create gravity?
- "All-optical complex field imaging using diffractive processors" (2024) https://www.nature.com/articles/s41377-024-01482-6 .. "New imager acquires amplitude and phase information without digital processing" (2024) https://www.google.com/amp/s/phys.org/news/2024-05-imager-am...
- "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
- "Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://www.nature.com/articles/35018520 .. https://www.impactlab.com/2024/10/13/photons-defy-time-new-q... ; Rubidium
- "Gain-assisted superluminal light propagation" (2000) https://www.nature.com/articles/35018520 ; Cesium
- "Exact Quantum Electrodynamics of Radiative Photonic Environments" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. "New theory reveals the shape of a single photon"
- "Physicists Magnetize a Material with Light" (2025) https://news.ycombinator.com/item?id=42628841
Ask HN: What are the biggest PITAs about managing VMs and containers?
I’ve been asked to write a blog post about “The PITA of managing containers and VMs.”
It's meant to be a rant listicle (with explanations as appropriate). What should I be sure to include?
One of the goals of containers is to unify the development and deployment environments. I hate developing and testing code in containers, so I develop and test code outside them and then package and test it again in a container.
Containerized apps need a lot of special boilerplate to determine how much CPU and memory they are allowed to use. It’s a lot easier to control resource limits with virtual machines, because all of the VM’s system resources are dedicated to the application.
Orchestration of multiple containers for dev environments is just short of feature complete. With Compose, it’s hard to bring down specific services and their dependencies so you can then rebuild and rerun. I end up writing Ansible playbooks to start and stop components that are designed to be executed in particular sequences. Ansible makes it hard to detach a container, wait a specified time, and see if it’s running. Compose just needs to be updated to support management of shutting down and restarting containers, so I can move away from Ansible.
Services like Kafka that query the host name and broadcast it are difficult to containerize, since the host name inside the container doesn’t match the external host name. This requires manual overrides, which are hard to specify at run time because the orchestrators don’t make it easy to pass the host name into the container. (This is more of a Kafka issue, though.)
Systemd, k8s, Helm, and Terraform model service dependencies.
Quadlet is Podman’s recommended way to run Podman containers under systemd instead of k8s.
Podman supports kubes of containers and pods of containers;
man podman-container
man podman-generate-kube
man podman-kube
man podman-pod
`podman generate kube` generates YAML for `podman kube play` and for k8s `kubectl`. Podman Desktop can create a local k8s (kubernetes) cluster with any of kind, minikube, or openshift local. k3d and rancher also support creating one-node k8s clusters with minimal RAM requirements for cluster services.
kubectl is the utility for interacting with k8s clusters.
k8s Ingress API configures DNS and Load Balancing (and SSL certs) for the configured pods of containers.
E.g. Traefik and Caddy can also configure the load balancer web server(s) and request or generate certs given access to a docker socket to read the labels on the running containers to determine which DNS domains point to which containers.
Container labels can be specified in the Dockerfile/Containerfile, and/or a docker-compose.yml/compose.yml, and/or in k8s yaml.
Compose supports scaling a service to a number of replicas: `docker compose up --scale web=3`.
Terraform makes infrastructure configuration consistent.
Compose does not support rolling or blue/green deployment strategies. Does Compose support HA (high-availability) deployments? If not, it’s hard to justify investing in a Compose YAML based setup instead of k8s YAML.
Quadlet is the way to do podman containers without k8s; with just systemd for now.
Thanks! I’ll take a look at quadlet.
I find that I tend to package one-off tasks as containers as well; for example, creating database tables and users. Compose supports these sorts of things. Ansible actually makes it easy to run and block on container tasks that you don’t detach.
I’m not interested in running kubernetes, even locally.
Podman kube has support for k8s Jobs now: https://github.com/containers/podman/pull/23722
k8s docs > concepts > workloads > controllers > Jobs: https://kubernetes.io/docs/concepts/workloads/controllers/jo...
Ingress, Deployment, StatefulSets: https://news.ycombinator.com/item?id=37763931
Northeastern's curriculum changes abandon fundamentals of computer science
Racket: https://learnxinyminutes.com/racket/
Python: https://learnxinyminutes.com/python/
pyret?
pyret: https://pyret.org/pyret-code/ :
> Why not just use Java, Python, Racket, OCaml, or Haskell?
IMHO, fun educational languages aren't general purpose or production ready; and at this point in my career I would also appreciate a more reusable language in a CS curriculum.
Python isn't CS pure like [favorite lisp], but it is a language coworkers will understand, it supports functional and object-oriented paradigms, and the pydata tools enable CS applications in STEM.
A lot of AI and ML code is written in Python, with C/Rust/Go.
There's an AIMA Python, but there's not a pyret or a racket AIMA or SICP, for example.
"Why MIT Switched from Scheme to Python (2009)" https://news.ycombinator.com/item?id=14167453
Computational thinking > Characteristics: https://en.wikipedia.org/wiki/Computational_thinking#Charact...
"Ask HN: Which school produces the best programmers or software engineers?" https://news.ycombinator.com/item?id=37581843
overleaf/learn/Algorithms re: nonexecutable LaTeX ways to specify algorithms: https://www.overleaf.com/learn/latex/Algorithms
Book: "Classic Computer Science Algorithms in Python"
coding-problems: https://github.com/MTrajK/coding-problems
coding-interview-university: https://github.com/jwasham/coding-interview-university
coding-interview-university lists "Computational complexity" but not "Computational thinking", which possibly isn't that different from WBS (work breakdown structure), problem decomposition and resynthesis, and the Scientific Method
Ask HN: How can I learn to better command people's attention when speaking?
I've noticed over the years that whenever I'm in group conversations in a social setting, people in general don't pay too much attention to what I say. For example, let's say the group is talking about travel and someone says something I find relatable e.g. someone mentions a place I've been to and really liked. When I try to contribute to the conversation, people just don't seem interested, and typically the conversation moves on as if I hadn't said anything. If I try to speak for a longer time (continuing with the travel example, let's say I try to talk about a particular attraction I enjoyed visiting at that location), I'm usually interrupted, and the focus shifts to whoever interrupted me.
This has happened a lot, and still happens often, in different social circles, with people of diverse backgrounds. So I figure it's not that I hang out with rude people; the problem must be me. I think the saddest part of all this is that even my wife's attention drifts off most of the time I try to talk to her.
I know it's not a language barrier issue, and I know for sure I enunciate my words well. I wonder though if the issue may be that I have a weak voice, or just an overall weak presence/body language. How can that be improved, if that's the case?
Could building rapport help?
Book: "Power Talk: Using Language to Build Authority and Influence" (2001) https://g.co/kgs/6L8MxNy
- Speaking from the edge, Speaking from the center
"From Comfort Zone to Performance Management" (2009) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2... .. https://news.ycombinator.com/item?id=32786594 :
> The ScholarlyArticle also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages
Show HN: TabPFN v2 – A SOTA foundation model for small tabular data
I am excited to announce the release of TabPFN v2, a tabular foundation model that delivers state-of-the-art predictions on small datasets in just 2.8 seconds for classification and 4.8 seconds for regression compared to strong baselines tuned for 4 hours. Published in Nature, this model outperforms traditional methods on datasets with up to 10,000 samples and 500 features.
The model is available under an open license: a derivative of the Apache 2 license with a single modification, adding an enhanced attribution requirement inspired by the Llama 3 license: https://github.com/PriorLabs/tabpfn. You can also try it via API: https://github.com/PriorLabs/tabpfn-client
TabPFN v2 is trained on 130 million synthetic tabular prediction datasets to perform in-context learning and output a predictive distribution for the test data points. Each dataset acts as one meta-datapoint to train the TabPFN weights with SGD. As a foundation model, TabPFN allows for fine-tuning, density estimation and data generation.
Compared to TabPFN v1, v2 now natively supports categorical features and missing values. TabPFN v2 performs just as well on datasets with or without these. It also handles outliers and uninformative features naturally, problems that often throw off standard neural nets.
TabPFN v2 performs as well with half the data as the next best baseline (CatBoost) with all the data.
We also compared TabPFN to the SOTA AutoML system AutoGluon 1.0. Standard TabPFN already outperforms AutoGluon on classification and ties on regression, but ensembling multiple TabPFNs in TabPFN v2 (PHE) is even better.
There are some limitations: TabPFN v2 is very fast to train and does not require hyperparameter tuning, but inference is slow. The model is also only designed for datasets up to 10k data points and 500 features. While it may perform well on larger datasets, it hasn't been our focus.
We're actively working on removing these limitations and intend to release new versions of TabPFN that can handle larger datasets, have faster inference and perform in additional predictive settings such as time-series and recommender systems.
We would love for you to try out TabPFN v2 and give us your feedback!
anyone tried this? is this actually overall better than xgboost/catboost?
Benchmark of tabpfn<2 compared to xgboost, lightgbm, and catboost: https://x.com/FrankRHutter/status/1583410845307977733 .. https://news.ycombinator.com/item?id=33486914
A video tour of the Standard Model (2021)
Standard Model: https://en.wikipedia.org/wiki/Standard_Model
Mathematical formulation of the Standard Model: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
Physics beyond the Standard Model: https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo...
Story of the LaTeX representation of the standard model, from a comment re: "The deconstructed Standard Model equation" (2016): https://news.ycombinator.com/item?id=41753471#41772385
Can manim work with sympy expressions?
A standard model demo video could vary each (or a few) highlighted variables and visualize the [geometric,] impact/sensitivity of each
Excitons in the fractional quantum Hall effect
"Excitons in the Fractional Quantum Hall Effect" (2024) https://arxiv.org/abs/2407.18224
Fractional quantum Hall effect in rhombohedral, pentalayer graphene: https://news.ycombinator.com/item?id=42314581
Fractional quantum Hall effect: https://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect
Superconducting nanostrip single photon detectors made of aluminum thin-films
"Superconducting nanostrip single photon detectors fabricated of aluminum thin-films" (2024) https://www.sciencedirect.com/science/article/pii/S277283072...
"Fabricating single-photon detectors from superconducting aluminum nanostrips" (2025) https://phys.org/news/2025-01-fabricating-photon-detectors-s...
Combining graphene and nanodiamonds for better microplasma devices
"High stability plasma illumination from micro discharges with nanodiamond decorated laser induced graphene electrodes" (2024) https://www.sciencedirect.com/science/article/pii/S277282852...
Physicists Magnetize a Material with Light
"Researchers discover new material for optically-controlled magnetic memory" (2024) https://phys.org/news/2024-08-material-optically-magnetic-me... ..
"Distinguishing surface and bulk electromagnetism via their dynamics in an intrinsic magnetic topological insulator" (2024) https://www.science.org/doi/10.1126/sciadv.adn5696
> MnBi2Te4
ScholarlyArticle: "Terahertz field-induced metastable magnetization near criticality in FePS3" (2024) https://www.nature.com/articles/s41586-024-08226-x
"Room temperature chirality switching and detection in a helimagnetic MnAu2 thin film" (2024) https://www.nature.com/articles/s41467-024-46326-4 .. https://scitechdaily.com/memory-breakthrough-helical-magnets... .. https://news.ycombinator.com/item?id=41921153
Can we create matter from light?
Indeed we can:
https://www.energy.gov/science/np/articles/making-matter-col...
Scientists find strong evidence for the long-predicted Breit-Wheeler effect—generating matter and antimatter from collisions of real photons.
Breit–Wheeler process > Experimental observations: https://en.wikipedia.org/wiki/Breit%E2%80%93Wheeler_process#...
Scientists find 'spooky' quantum entanglement within individual protons
Tell HN: ChatGPT can't show you a 5.25" floppy disk
Challenge: Come up with a query that makes it draw something resembling a 5.25" floppy, that is, without the metal shield and hub that is present on the 3.5" disks.
PostgreSQL Support for Certificate Transparency Logs Now Available
Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
Sigstore Rekor also has centralized Merkle hashes.
Doesn’t Rekor run on top of Trillian?
German power prices turn negative amid expansion in renewables
Given the intraday prices, are there sufficient incentives to stimulate creation of energy storage businesses to sell the excess electricity back a couple hours or days later?
Or have the wasteful idiot cryptoasset miners been regulated out of existence such that there is no longer a buyer of last resort for excess rationally-subsidized clean energy?
Are the Duck Curve and Alligator curve problems the same in EU and other energy markets with and without intraday electricity prices?
Duck curve: Solar peaks around noon, but energy usage peaks around the evening commute and dinner; https://en.wikipedia.org/wiki/Duck_curve
Alligator curve: Wind peaks in the middle of the night;
https://fresh-energy.org/renewable-integration-in-the-midwes... :
> So, if we’re not duck-like, what is the Upper Midwest’s energy mascot? We give you: the Smilin’ Gator Curve. Unlike California’s ominous, faceless duck, Sally Gator welcomes the Midwest’s commitment to renewable generation!
(An excellent drawing of a cartoon municipally-bonded incumbent energy utility mascot!)
Can cryptoasset or data mining firms scale to increase demand for electricity when energy prices are low or below zero? What are their relocation costs?
Is such a low subsidized energy price lucrative to energy storage, not-yet-PQ Proof-of-Work, and other data mining firms?
Can't GPE (gravitational potential energy) storage in old but secured mine shafts scale to meet energy storage requirements?
As far as I’m aware, at least what has been explained to me, storage is a smaller problem. The bigger problem is how do you transfer that energy from where it is stored to where it is needed. Wind farms in the sea help nothing in Bayern.
So, per your understanding, grid connectivity for energy production projects is a bigger issue than energy storage in their market?
Price falls below zero because there's a supply glut (and due to aggressive, excessive, or effective subsidies to accelerate the transition to clean energy).
Is it the national or regional electricity prices that are falling below zero?
Is the price lower during certain hours of the day? If so, then energy storage could probably smooth that out.
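As a toy illustration of that smoothing (all prices invented for the sketch), a storage operator would charge at the cheapest intraday hour and discharge at the dearest:

```python
# Toy intraday arbitrage: charge storage at the cheapest hour, discharge at
# the dearest. Prices (EUR/MWh by hour) are invented for illustration.
prices = {0: 40, 4: 35, 12: -5, 18: 90, 21: 70}

buy_hour = min(prices, key=prices.get)   # hour with the lowest price
sell_hour = max(prices, key=prices.get)  # hour with the highest price
spread = prices[sell_hour] - prices[buy_hour]

print(buy_hour, sell_hour, spread)  # 12 18 95: paid to charge at noon, sell at 18:00
```

With a negative noon price the operator is paid on both legs, which is exactly the incentive the question asks about.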
FedNow supports ILP Interledger Protocol, which is an open spec that works with traditional ledgers and distributed cryptoasset ledgers.
> In addition to Peering, Clearing, and Settlement, ILP Interledger Protocol Specifies Addresses: https://news.ycombinator.com/item?id=36503888
>> ILP is not tied to a single company, payment network, or currency
ILP Addresses - v2.0.0 > Allocation Schemes: https://github.com/interledger/rfcs/blob/main/0015-ilp-addre...
People that argue for transaction privacy in blockchains: large investment banks, money launderers, the US Government when avoiding accountability because natsec.
Whereas today presumably there are database(s) of checks sent to contractors for the US Gvmt; and maybe auditing later.
Re: apparently trillions missing re: seasonal calls to "Audit the Fed! Audit DoD!" and "The Federal Funding Accountability and Transparency Act of 2006" which passed after Illinois started tracking grants: https://news.ycombinator.com/item?id=25893860
DHS helped develop W3C DIDs, which can be generated in a decentralized way and optionally registered centrally, or generated and registered centrally.
W3C Verifiable Credentials support DIDs Decentralized Identifiers.
Do not pay for closed source or closed spec capabilities; especially for inter-industry systems that would need to integrate around an API spec.
Do not develop another blockchain; given the government's inability to attract and retain talent in this space, it is unlikely that a few million dollars and government management would exceed the progress of billions invested in existing blockchains.
There's a lot of anti-blockchain FUD. Ask them to explain the difference between multi-primary SQL database synchronization system with off-site nodes (and Merkle hashes between rows), and a blockchain.
Why are there Merkle hashes in the centralized Trillian and now PostgreSQL databases that back CT Certificate Transparency logs (the logs of X.509 cert granting and revocations)?
Why did Google stop hosting a query endpoint for CT logs? How can single points of failure be eliminated in decentralized systems?
Blockchains are vulnerable to DoS Denial of Service like all other transaction systems. Adaptive difficulty and transaction fees that equitably go to miners or are just burnt are blockchain solutions to Denial of Service.
"Stress testing" to a web dev means something different than "stress testing" the banks of the Federal Reserve system, for example.
A webdev should know that as soon as your app runs out of (SQL) database connections, it will start throwing 500 Internal Server errors. MySQL, for example, defaults to 150+1 max connections.
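A minimal sketch of that failure mode (151 is MySQL's documented default `max_connections`, i.e. 150 client slots plus 1 reserved; the semaphore stands in for a real driver's connection pool):

```python
import threading

MAX_CONNECTIONS = 151  # MySQL default: 150 client slots + 1 reserved

pool = threading.BoundedSemaphore(MAX_CONNECTIONS)

def handle_request() -> int:
    """Return an HTTP status: 500 when no DB connection is available."""
    if not pool.acquire(blocking=False):
        return 500  # Internal Server Error: pool exhausted
    try:
        return 200  # the query would run here
    finally:
        pool.release()

# Simulate heavy load holding every connection open:
for _ in range(MAX_CONNECTIONS):
    pool.acquire(blocking=False)
print(handle_request())  # 500
```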
Stress testing for large banks does not really test for infosec resource exhaustion. Stress testing banks involves them making lots of typically large transactions; not lots of small transactions.
Web Monetization is designed to support micro payments, could support any ledger, and is built on ILP.
ILP makes it possible for e.g. 5x $100 transactions to be auditably grouped together. Normal payers (unlike the US government's bank) must source liquidity from counterparties, which is easier to do with many smaller transactions.
Why do blockchains require additional counterparties in two party (payer-payee) transactions?
To get from USD to EUR, for example, sometimes it's less costly to go through CAD. Alice holds USD, Bob wants EUR, and Charlie holds CAD and EUR and accepts USD, but will only extend $100 of credit per party.
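A hedged sketch of that routing idea (the pairs and credit limits are invented; real ILP connectors also quote exchange rates and fees):

```python
from collections import deque

# Available conversion hops and the credit each counterparty extends,
# in USD equivalents. Invented figures: Charlie bridges USD -> CAD -> EUR
# but will only extend $100 of credit per party.
quotes = {
    ("USD", "CAD"): 100,
    ("CAD", "EUR"): 100,
}

def find_path(src: str, dst: str, amount: float):
    """BFS over currency pairs; a hop is usable only if its credit limit covers the amount."""
    queue = deque([(src, [src])])
    seen = {src}
    while queue:
        cur, path = queue.popleft()
        if cur == dst:
            return path
        for (a, b), limit in quotes.items():
            if a == cur and b not in seen and limit >= amount:
                seen.add(b)
                queue.append((b, path + [b]))
    return None  # no route with sufficient credit

print(find_path("USD", "EUR", 100))  # ['USD', 'CAD', 'EUR']
print(find_path("USD", "EUR", 150))  # None: exceeds Charlie's $100 credit
```

Splitting the $150 into two smaller payments would route where the single payment cannot, which is the liquidity point above.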
ripplenet was designed for that from the start. Interledger was contributed by ripplecorp to W3C as an open standard, and ILP has undergone significant revision since being open sourced.
ILP does not require XRP, which - like XLM - is premined and has a transaction fee less than $0.01.
Ripplenet does not have Proof of Work mining: the list of transaction validator server IPs is maintained by pull request merge consensus in the GitHub repo.
The global Visa network claims to do something like 60,000 TPS. Bitcoin can do 6-7 TPS, and is even slower if you try to build it without blocks.
I thought I read that a stellar benchmark reached 10,000 TPS but they predicted that the TPS would be significantly greater with faster more expensive validation servers.
E.g. the Crypto Kitties NFT smart contract game effectively DoS'd pre-sharding Ethereum, which originally did 15-30 TPS IIRC. Ethereum 2.0 reportedly intends to handle 100,000 TPS.
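Those throughput figures are back-of-envelope: transactions per block divided by the block interval (the per-block counts and intervals below are rough public estimates, not measurements):

```python
def tps(tx_per_block: float, block_seconds: float) -> float:
    """Sustained throughput implied by block capacity and block interval."""
    return tx_per_block / block_seconds

print(round(tps(2700, 600), 1))  # Bitcoin: ~2700 tx per ~10 min block -> 4.5 TPS
print(round(tps(300, 13), 1))    # pre-merge Ethereum: ~300 tx per ~13 s block -> 23.1 TPS
```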
US Contractor payees would probably want to receive a stablecoin instead of a cryptoasset with high volatility.
Some citizens received a relief check to cash out or deposit, and others received a debit card for an account created for them.
I've heard that the relief loan program is the worst fraud in the history of the US government. Could any KYC or AML practices also help prevent such fraud? Does uploading a scan of a photo ID and/or routing and account numbers on a cheque make exchanges more accountable?
FWIU, only Canadian banks give customers the option to require approval for all deposits. Account holders do not have the option to deny deposits in the US, FWIU.
I don't think the US Government can acquire USDC. Awhile back stablecoin providers were audited and admonished.
A reasonable person should expect US Government backing of a cryptoasset to reduce volatility.
Large investment banks claimed to be saving the day on cryptoasset volatility.
High-frequency market makers claim to be creating value by creating liquidity at volatile prices.
They eventually added shorting to Bitcoin, which doesn't account for debt obligations; there is no debt within the Bitcoin network: either a transaction clears within the confirmation time or it doesn't.
There are no chargebacks in Bitcoin; a refund is an optional transaction between B and A, possibly with the same amount less fees.
There is no automatic rebilling in Bitcoin (and by extension other blockchains) because the payer does not disclose the private key necessary to withdraw funds in their account to payees.
Escrow can be done with multisig ("multi signature") transactions or with smart contracts; if at least e.g. 2 out of 3 parties approve, the escrowed transaction completes. So if Alice escrows $100 for Bob conditional upon receipt of a product from Bob, and Bob says he sent it and third-party Charlie says it was received, that's 2 out of 3 approving, so Alice's $100 would then be sent to Bob.
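A minimal 2-of-3 approval sketch (booleans stand in for the cryptographic signatures a real multisig transaction or smart contract would verify):

```python
def escrow_releases(approvals: dict, threshold: int = 2) -> bool:
    """Release escrowed funds iff at least `threshold` of the parties approve."""
    return sum(approvals.values()) >= threshold

# Alice escrows $100 for Bob; Bob attests he shipped, Charlie attests receipt.
print(escrow_releases({"alice": False, "bob": True, "charlie": True}))   # True
print(escrow_releases({"alice": False, "bob": True, "charlie": False}))  # False
```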
All blockchains must eventually hard fork to PQ Post Quantum hashing and encryption, or keep hard forking to keep doubling non-PQ key sizes (if they are not already PQ).
PQ Post Quantum algos typically have a different number of characters, so any hard fork to PQ account keys and addresses will probably require changing data validation routines in webapps that handle transactions.
The coinbase field in a Bitcoin transaction struct can be used for correlating between blockchain transactions and rows in a SQL database that claim to have valid data or metadata about a transaction: you put a unique signed value in the coinbase field when you create transactions, and your e.g. SQL or Accumulo database references the value stored in the coinbase field as a foreign key.
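A sketch of that correlation pattern (HMAC stands in for a real signature scheme, and a dict stands in for the SQL/Accumulo table; the record names are invented):

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # signing key held by whoever writes the metadata

def coinbase_tag(record_id: str) -> str:
    """Unique signed value to embed in the transaction's coinbase field."""
    return hmac.new(SECRET, record_id.encode(), hashlib.sha256).hexdigest()

metadata = {}  # off-chain table keyed by the coinbase tag (the foreign key)
tag = coinbase_tag("invoice-42")
metadata[tag] = {"record": "invoice-42", "amount_usd": 100}

# Later: read the tag back out of the on-chain transaction, verify it was
# produced by the holder of SECRET, and join on it.
assert hmac.compare_digest(tag, coinbase_tag("invoice-42"))
print(metadata[tag]["record"])  # invoice-42
```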
Crypto tax prep services can't just read transactions from public blockchains; they need exchange API access to get the price of the asset on that exchange at the time of that transaction: there's no on-chain price oracle.