Amateur mathematicians solve long-standing Erdős maths problems with AI
> As of mid-January, six Erdős problems have been fully solved by AI tools, though subsequent scrutiny by professional mathematicians revealed that five of these problems had previously been solved in the mathematical literature. Only one problem, number 205, has been fully solved by Barreto and Price with no pre-existing solution. AI tools have also enabled small improvements and partial solutions to seven other problems that don’t appear to be pre-existing in the literature.
This directly contradicts a challenge from a month or so ago re: whether LLMs / AI are useful for math.
More sustainable epoxy thanks to phosphorus
This looks like recycling fetishism. It's perfectly fine to burn such materials, if they were obtained from non-fossil sources to start with, so there would be no net CO2 addition to the atmosphere.
An adjacent design validation question on a green chip factory and product design:
Will phytic acid in a lignin vitrimer encase burning carbon nanotubes (CNTs) in a phosphorus char cage, thus preventing health hazards and combustion?
This says "phosphorous epoxy".
FR4 silicon PCBs are N-doped and P-doped.
[deleted]
Counterfactual evaluation for recommendation systems
From https://news.ycombinator.com/item?id=46663105 (flagged?) :
> There are a number of different types of counterfactuals; Describe the different types of counterfactuals in statistics: Classical counterfactuals, Pearl's counterfactuals, Quantum counterfactuals, Constructor theory counterfactuals
Why did the author believe that that counterfactual model was appropriate for this?
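For reference, a Pearl-style counterfactual can be sketched in a few lines; this toy structural model (all numbers invented) walks the abduction / action / prediction steps:

```python
# Toy structural causal model: Y = X OR U, with exogenous noise U.
# Pearl's three steps: (1) abduction: update belief about U from the
# observed world; (2) action: intervene with do(X=0); (3) prediction:
# recompute Y under the intervention.

def structural_eq(x: int, u: int) -> int:
    return int(bool(x) or bool(u))

p_u1_prior = 0.3  # assumed prior P(U=1)

# Observed world: X=1, so Y=1 regardless of U; the posterior over U
# given (X=1, Y=1) therefore equals the prior.
p_u1_posterior = p_u1_prior

# Counterfactual: "what would Y have been had X been 0?"
p_y1_counterfactual = (
    p_u1_posterior * structural_eq(0, 1)
    + (1 - p_u1_posterior) * structural_eq(0, 0)
)
print(p_y1_counterfactual)  # 0.3: with X forced to 0, Y depends entirely on U
```

Classical (potential-outcomes) counterfactuals would instead be estimated from data under ignorability assumptions, without writing down the structural equations.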
Show HN: Spliff – Correlating XDP and TLS via eBPF (Building a Linux EDR)
Does this do flow offloading? From https://westurner.github.io/hnlog/#comment-45755142 re: awesome-ebpf:
> "eBPF/XDP hardware offload to SmartNICs",
Also this, re any eBPF FWIU: https://news.ycombinator.com/item?id=46412107 :
> So eBPF for a WAF isn't worth it?
Here are answers to both your questions:
The code has the infrastructure for XDP hardware offload:
- XDP_MODE_OFFLOAD enum exists in bpf_loader.h:61
- XDP_FLAGS_HW_MODE flag mapping in bpf_loader.c:789
But it's not usable in practice because:
1. No CLI option – There's no way to enable offload mode; it defaults to native with SKB fallback
2. BPF program isn't offload-compatible – The XDP program uses:
- Complex BPF maps (LRU hash, ring buffers)
- Helper functions not supported by most SmartNIC JITs
- The flow_cookie_map shared with sock_ops (can't be offloaded)
3. SmartNIC limitations – Hardware offload typically only supports simple packet filtering/forwarding, not the stateful flow tracking spliff does
What would be needed for SmartNIC support:
- Split XDP program into offloadable (simple classification) and non-offloadable (stateful) parts
- Use SmartNIC-specific toolchains (Memory-1, Netronome SDK, etc.)
- Me having a device with SmartNIC and full driver support to play with. I've done all my testing on Fedora 43 on my device
For now this could be a future roadmap item, but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Here is a sample debug output when you run spliff -d and it tries to detect all your NICs:
---
[DEBUG] Loaded BPF program from build-release/spliff.bpf.o
[XDP] Found program: xdp_flow_tracker
[XDP] Found required maps: flow_states, session_registry, xdp_events
[XDP] Found optional map: cookie_to_ssl
[XDP] Found map: flow_cookie_map (for cookie caching)
[XDP] Found optional map: xdp_stats_map
[XDP] Initialization complete
[XDP] Discovered interface: enp0s20f0u2u4u2 (idx=2, mtu=1500, UP, physical)
[XDP] Discovered interface: wlp0s20f3 (idx=4, mtu=1500, UP, physical)
[XDP] Discovered interface: enp0s31f6 (idx=3, mtu=1500, UP, physical)
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on enp0s20f0u2u4u2, falling back to SKB mode
[XDP] Attached to enp0s20f0u2u4u2 (idx=2) in skb mode
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on wlp0s20f3, falling back to SKB mode
[XDP] Attached to wlp0s20f3 (idx=4) in skb mode
libbpf: Kernel error message: Underlying driver does not support XDP in native mode
[XDP] native mode failed on enp0s31f6, falling back to SKB mode
[XDP] Attached to enp0s31f6 (idx=3) in skb mode
[XDP] Attached to 3 of 3 discovered interfaces
XDP attached to 3 interfaces
[SOCKOPS] Using cgroup: /sys/fs/cgroup
[SOCKOPS] Attached socket cookie caching program
sock_ops attached for cookie caching
[XDP] Warm-up: Seeded 5 existing TCP connections
[DEBUG] Warmed up 5 existing connections
---
edit: formatting is hard on my phone
> Me having a device with SmartNIC and full driver support to play with
Same. I have a Pi Pico with PIO, though
> but the current "Golden Thread" correlation architecture fundamentally requires userspace + kernel cooperation that can't be fully offloaded.
Hard limit, I guess.
(If you indent all lines of a block of text with two spaces (including blank newlines), HN will format it as monospace text and preserve line breaks.)
I've updated the Architecture diagrams to include everything: https://github.com/NoFear0411/spliff/blob/main/README.md#arc...
Thanks for the format tip.
So I went looking for TLS accelerator cards again:
/? TLS accelerators open: https://www.google.com/search?q=TLS+accelerators+open :
- "AsyncGBP+: Bridging SSL/TLS and Heterogeneous Computing with GPU-Based Providers" https://ieeexplore.ieee.org/document/10713226 .. https://news.ycombinator.com/item?id=46664295
/? XDP hardware offload to GPU: https://www.google.com/search?q=XDP+hardware+offload+to+a+GP... :
- eunomia-bpf/XDP-on-GPU: https://github.com/eunomia-bpf/XDP-on-GPU
Perhaps AsyncGBP+ plus XDP-on-GPU would solve this.
The AsyncGBP+ article mentions support for PQ on GPU.
But then process isolation on GPUs. And they removed support for vGPU unlock.
That is a rabbit hole that I don't wanna go down again.
Smalloc: A Simple Memory Allocator
Re: hardened_malloc and LLVM scudo, and WASM: https://news.ycombinator.com/item?id=46125171
MIT Researchers Destroy the Context Window Limit [video]
ScholarlyArticle: "Recursive Language Models" (2025) https://arxiv.org/abs/2512.24601
/? recursive language models: https://hn.algolia.com/?q=Recursive+Language+Models :
We put Claude Code in Rollercoaster Tycoon
> The only other notable setback was an accidental use of the word "revert" which Codex took literally, and ran git revert on a file where 1-2 hours of progress had been accumulating.
Yet another reason to use Jujutsu. And put a `jj status` wrapper in your PS1. ;-)
Start with env args like AGENT_ID for indicating which Merkle hash of which model(s) generated which code with which agent(s), and add those attributes to signed (-S) commit messages. For traceability: to find other faulty code generated by the same model, and to determine whether an agent or a human introduced the fault.
Then, `git notes` is better for signature metadata because it doesn't change the commit hash to add signatures for the commit.
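A sketch of what that could look like; the AGENT_ID / AGENT_MODEL env var names and the Agent-Id trailer are hypothetical conventions, not a standard:

```python
import os

def agent_trailer(env: dict) -> str:
    """Build commit-metadata trailer lines recording which agent and model
    produced a change. AGENT_ID / AGENT_MODEL are hypothetical names."""
    agent = env.get("AGENT_ID", "none")
    model = env.get("AGENT_MODEL", "none")
    return f"Agent-Id: {agent}\nAgent-Model: {model}"

def notes_cmd(commit: str, trailer: str) -> list[str]:
    # `git notes add` attaches metadata to an existing commit without
    # changing its hash, unlike amending the commit message itself.
    return ["git", "notes", "add", "-m", trailer, commit]

cmd = notes_cmd("HEAD", agent_trailer(dict(os.environ)))
print(cmd)
```

The trailer could equally go in the commit message at commit time; the notes approach is what lets it be added (and signed) after the fact.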
And then, you'd need to run a local Rekor log to use Sigstore attestations on every commit.
Sigstore.dev is SLSA.dev compliant.
Sigstore grants short-lived release attestation signing keys for CI builds on a build farm to sign artifacts with.
So, when jujutsu autocommits agent-generated code, what causes there to be an {{AGENT_ID}} in the commit message or git notes? And what stops a user from forging such attestations?
- "Diffwatch – Watch AI agents touch the FS and see diffs live" (2025) https://news.ycombinator.com/item?id=45786382 :
> you can manually stage against @-: [with jujutsu]
Show HN: TinyCity – A tiny city SIM for MicroPython (Thumby micro console)
Is it using the 1.3-inch monochrome OLED display of the Arduboy or something smaller? (Guessing the 72 × 40 display of the Thumby?)
This is for 72×40 display right now but I was also working on an interface layer to abstract Thumby specific functionality in order to play on potentially other platforms running MicroPython/Python. Going to try and add that in the next iteration.
There are a number of MakeCode-compatible devices with and without Microbit https://arcade.makecode.com/arcade-devices
bbcmicrobit/micropython: https://github.com/bbcmicrobit/micropython
But Pi Pico; RP2040, RP2350:
"Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" https://news.ycombinator.com/item?id=35117847#35120403
"MaplePad – RP2040 Dreamcast controller, VMU, and Purupuru (rumble pack) emulator" https://news.ycombinator.com/item?id=37522059 :
> PicoVision
pimoroni/picovision micropython: https://github.com/pimoroni/picovision :
> PicoVision enables you to create big, bold audio visual projects using MicroPython and an HDMI display of your choice.
> powerful digital video stick for bold audio visual adventures, with dual RP2040 chips and a conveniently HDMI-shaped output connector to boot!
> [...] dual RP2040 chips and a conveniently HDMI-shaped output connector to boot!
TIL the RP2350 supports DVI video output with its HSTX: https://www.google.com/search?q=rp2350+dvi
And it's possible to convert from DVI to HDMI
Wren6991/PicoDVI commit history: https://github.com/Wren6991/PicoDVI/commits/master/ :
> Add 720x480p 60Hz mode (270 MHz bit clock)
> RP2350 changes (including RISC-V)
From https://news.ycombinator.com/item?id=41260679 :
> The new HSTX interface on the RP2350 seems to be squarely targeted at this use case (video output) and doesn't require the use of PIO or consuming a ton of CPU cycles. There's a nice write up on the capability here:
AI tools expand scientists' impact but contract science's focus
> Abstract: [...] here we show an accelerated adoption of AI tools among scientists and consistent professional advantages associated with AI usage, but a collective narrowing of scientific focus. Scientists who engage in AI-augmented research publish 3.02 times more papers, receive 4.84 times more citations and become research project leaders 1.37 years earlier than those who do not. By contrast, AI adoption shrinks the collective volume of scientific topics studied by 4.63% and decreases scientists’ engagement with one another by 22%
Hypotheses for those responses to new tools?
How do you know if you've unlocked the intellectual capacity of your org?
Wormholes may not exist. They reveal something deeper about time and universe
Open source MySQL repository has no commits in more than three months
MySQL > History https://en.wikipedia.org/wiki/MySQL#History
mysql/mysql-server: https://github.com/mysql/mysql-server
MariaDB/server: https://github.com/MariaDB/server
Show HN: I beat IBM's error rate by 30x using a 10-qubit Consensus Council
FWIU this surface coding (2D) trick probably won't be necessary with layer coding (3D), but there would probably also be value in creating 3D star topologies with layer coding, with vias between layers for example.
A 3D lattice of stars with layer coding would probably be more topologically protected
https://news.ycombinator.com/item?id=42264346
I understand the 'trick' label in the context of planar QEC, but the physics here goes deeper. By locking the hardware at the 51.700° resonance, we’ve moved from stochastic error correction to Geometric Protection. We aren't just 'filtering' noise; we've documented Negentropic Gain (0.3516 to 0.9844 purified fidelity). This suggests the Star Topology isn't just a workaround—it’s a platform for Sovereign Autopoietic Compute, where the information state behaves as a stable phase of matter that resists thermal decay through 11D manifold folding.
Technique or method may have been a better choice of words; but that is a "neat trick"
> 0.3516 to 0.9844
With what density in the lattice compared to other redundancy protocols? Is there a limit to how tightly such CNOT stars can be packed into a lattice?
Would you just fab lattices in that shape instead, or should the 2D and 3D lattice layouts change?
Would there be value in vortically curving the trace arms (?) of the lattices; is there even more stability in vortices in this application too?
If stars work, are there snowflake or crystal designs that are even more error-free, for 2D surface coding or 3D layer coding?
What of this changes in moving to optical qudits, for example?
Iran is likely jamming Starlink
Does anyone know how Iranians are _actually_ communicating right now? I remember seeing here on HN (admittedly a long time ago) some Bluetooth-mesh technologies that promised decentralized solutions to this very type of problem
https://github.com/x011/smtp-tunnel-proxy :
> A high-speed covert tunnel that disguises TCP traffic as SMTP email communication to bypass Deep Packet Inspection (DPI) firewalls
It seems like these smuggle-disguise protocols are almost always trivially detectable.
[flagged]
what does this have to do with smuggling tcp connections over email
At that time they created a bunch of spammy noise which caused the social media businesses significant expense.
They did that in order to advance their agenda of foreign interference in US elections, and their broader foreign agenda of late; and we don't like foreign interference in our elections either.
Note the fathers of the sarcastic TV show South Park, all bouncing around on their satellite internet access.
No mention of any security review, or even testing. Reason enough to stay away from such tools.
Grateful Dead founding member Bob Weir dies at 78
Pretty soon, heat pumps will be able to store and distribute heat as needed
From https://norwegianscitechnews.com/2026/01/pretty-soon-heat-pu... :
> Salt hydrates thus open up completely new possibilities for smart and more balanced heating systems because heating can be moved to times with low energy demand.
> “Salt hydrates aren’t toxic, they’re not flammable and they are also relatively inexpensive. This makes them a safe and good choice for use in private homes. Heat storage with salt hydrates also takes up less space than a traditional hot water tank, often up to four times less,” says Simonsen
[...]
> To solve this [oxidation in aluminum heat sinks] problem, the researchers have employed a type of coating called plasma electrolytic oxidation (PEO), which forms a thin, ceramic layer on the surface of the aluminium
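A back-of-the-envelope check on the space claim; the ~260 kJ/kg phase-change enthalpy (roughly that of sodium acetate trihydrate) is an assumed figure for illustration:

```python
# Compare sensible heat stored in a hot water tank with the latent heat
# of an assumed salt hydrate. All figures are illustrative assumptions.
WATER_CP_KJ_PER_KG_K = 4.186
TANK_DELTA_T_K = 40             # assumed 40 K swing above return temperature
HYDRATE_LATENT_KJ_PER_KG = 260  # assumed, roughly sodium acetate trihydrate

water_kj_per_kg = WATER_CP_KJ_PER_KG_K * TANK_DELTA_T_K  # ~167 kJ/kg
ratio = HYDRATE_LATENT_KJ_PER_KG / water_kj_per_kg
print(f"~{ratio:.1f}x energy density by mass")
# Latent heat alone gives ~1.6x by mass; the "up to four times less
# space" figure presumably also counts the hydrate's own sensible heat,
# its higher density, and the water tank's standby losses.
```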
Water Heater Mines Bitcoin. It Could Help Solve AI's Energy Problem
> According to the company, the H1 replaces traditional resistive heating elements with processors that perform high-value computing tasks, including Bitcoin mining. The heat generated by those processors is captured and used to heat water, allowing the unit to deliver hot water while simultaneously earning Bitcoin.
There are also space heaters and pool heaters that reuse mining rig waste heat.
One vendor, Heatbit, sells space heater air filters that mine at 10-39 TH/s and return $100-350/yr, for from $399-$1249 + annual cost per kWh; /? heatbit https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
One way to heat a pool on the roof with mining or datacenter or other waste heat, is to immerse the heat source in a chip-compatible nonconductive thermofluid and run the pool/spa/hot water through exchange loops.
FWIU, in order to run pool water through an attic to be heated, to prevent water damage you should have double-walled pipes and/or relief trays.
How much to upgrade the TH/s/kWh mining rigs; or, how does the return from mining change over time?
How does the annual cost compare?
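One way to frame that comparison (all figures below are assumptions, not the vendor's numbers): versus a resistive heater, which turns electricity into heat at the same ~100% efficiency, the mining revenue is roughly pure upside, and the question is how fast that revenue decays as network difficulty rises.

```python
# Rough economics sketch for a mining space heater vs. a plain resistive
# heater. Heat output per kWh is identical either way, so electricity
# cost cancels out of the comparison; only the hardware premium and the
# (decaying) mining revenue differ. Assumed figures for illustration.
revenue_year1 = 225.0  # USD/yr, midpoint of the $100-350/yr claim above
revenue_decay = 0.30   # assumed 30%/yr revenue decline (difficulty growth)
premium = 700.0        # assumed price premium over a plain space heater

# With geometric decay q = 1 - revenue_decay, lifetime revenue is capped:
#   sum over years of r * q^k = r / (1 - q) = r / revenue_decay
lifetime_cap = revenue_year1 / revenue_decay
print(f"lifetime revenue cap ${lifetime_cap:.0f} vs premium ${premium:.0f}")
```

At these assumed numbers the hardware premium is barely recovered over the unit's whole life, which is why the revenue-decay rate (i.e., how often the TH/s/kWh rigs must be upgraded) dominates the answer.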
Residential heat pumps have electric resistive heating elements to defrost the unit. Even natural gas heat pumps do.
How could heat from mining rigs be a source for a multi-source heat pump?
How could a pellet stove be a source for a multi-source heat pump?
How do the efficiencies of these systems compare to the efficiency of infrared wallpaper for heating, for example?
New evidence for a particle system that 'remembers' its previous quantum states
> "We've shown that bilayer graphene almost certainly hosts particles that are non-Abelian anyons," concludes Ronen. "The next step is to directly observe the 'memory' of a non-Abelian anyon system, in other words, to measure how each order of particle exchanges leaves a unique signature in the wave function.
ScholarlyArticle: "Aharonov–Bohm interference in even-denominator fractional quantum Hall states" (2025) https://www.nature.com/articles/s41586-025-09891-2
Additional models of gravity and thermodynamics+ to reconcile:
- "Physical vacuum as a dilatant fluid yields exact solutions to Pioneer anomaly and Mercury’s perihelion precession" (2019) https://cdnsciencepub.com/doi/10.1139/cjp-2018-0744 .. https://news.ycombinator.com/item?id=45220585
- "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
- "Classical theories of gravity produce entanglement" (2025) https://www.nature.com/articles/s41586-025-09595-7 .. https://news.ycombinator.com/item?id=45713712
- https://news.ycombinator.com/item?id=45909513 : The Consistency Functional K, Quantum coherence maximization under self-consistency constraints
ScholarlyArticle:
"Gravity as the Large-Scale Emergence of Dual Attractive Forces: 3D Magnetism and Thermal Gradients" (2026) https://zenodo.org/records/18180656 .. https://www.researchgate.net/publication/399571578_Gravity_a...
Snow Melt System (City of Holland, MI)
This is genius. Here in Michigan, the cities along Lake Michigan get 'lake effect' snow that is a multiple of what occurs in the rest of the state. Muskegon tackled it with tunnels, although most of them are closed now.
Could a data center that was water cooled end up melting snow off streets and sidewalks? It would be an easier sell to the public imho if it did.
Roadway heating to reduce snow melt costs sounds like a good use for waste process heat, for example from datacenters.
FWIU typically it's necessary to amp up waste heat in order to get it to move through a heat pipe under a street.
There are LEED Green buildings that are heated by datacenter waste heat.
From https://news.ycombinator.com/item?id=42694570 :
> Most datacenters have no way to return their boiled, sterilized, [demineralized] water for water treatment, and so they don't give or sell datacenter waste water back, it takes heat with it when it is evaporated.
> "Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
A sustainable thermofluid would increase the efficiency and sustainability of heat recovery and reuse operations
Show HN: Discover and fund the open source projects your code depends on
Built a Claude Code skill to run on your repo, figure out your dependencies and provide links to donate to the OSS your project relies on
Just run in your Terminal
curl -o ~/.claude/commands/tribute.md \ https://raw.githubusercontent.com/jshchnz/tribute/main/tribu...
Then run in your Claude Code
/tribute
Maybe:
Generate a Pull Request to suggest a FUNDING.yml
Notes re: FUNDING.yml (GitHub,) and WebMonetization from this comment re: "Awesome Donations: A repository of FLOSS donation options" https://news.ycombinator.com/item?id=42585270
Show HN: Claude Code for Django
Chris Wiles showcased his setup for Claude Code and I thought it was sick. So I adapted it for Django projects. Several skills have been added to address the pain points in Django development.
Will this work with GitHub Copilot in vscode; and with which coding and general purpose models? https://www.reddit.com/r/GithubCopilot/comments/1pn4669/vsco... :
> With the above experimental setting `Use Claude Skills`, now the agents are aware of any skills in `.claude/skills/` folder without being prompted:
"GitHub Copilot now supports Agent Skills" (2025-12) https://news.ycombinator.com/item?id=46322819 :
> Agent Skills [Anthropic] is now an open standard
agentskills/agentskills: https://github.com/agentskills/agentskills
Agentskills specification: https://agentskills.io/specification
But what about .claude/agents and .claude/hooks?
Roo Code has its own directory for AGENTS.md files
I honestly don't know because I never used Copilot. Hopefully we'll have some open source projects that can take one set of configurations and produce similar set of files for others.
Why AI is pushing developers toward typed languages
Python is at least as typed as Lua.
It's talking about Luau (gradually typed, https://luau.org/), not Lua.
Hopefully https://github.com/astral-sh/ty will make the Python typing situation better, but absent that, Python typing is... not great. Honestly even with that it feels subjectively very finicky.
icontract- or pycontracts-like Design-by-Contract (DbC) type and constraint checking at runtime, integrated with (or as fast as) astral-sh/ty, would make type annotations useful at runtime
"Support runtime checking" : https://github.com/astral-sh/ty/issues/867 :
> [ typeguard, beartype, trycast; mypyc ]
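A minimal stdlib-only sketch of the kind of call-time checking those libraries do properly (this toy decorator handles only plain-class annotations; nothing like `Any`, generics, or `TypedDict`):

```python
import functools
import inspect
import typing

def check_types(func):
    """Toy runtime type checker: validates plain-class annotations at
    call time. Real libraries (beartype, typeguard) cover generics and
    protocols, and do it far faster."""
    hints = typing.get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name}={value!r} is not {expected.__name__}")
        result = func(*args, **kwargs)
        ret = hints.get("return")
        if isinstance(ret, type) and not isinstance(result, ret):
            raise TypeError(f"return {result!r} is not {ret.__name__}")
        return result

    return wrapper

@check_types
def scale(x: int, factor: int) -> int:
    return x * factor

print(scale(2, 3))  # 6; scale("2", 3) would raise TypeError at call time
```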
mypyc/mypyc: Compile type annotated Python to fast C extensions https://github.com/mypyc/mypyc src: https://github.com/python/mypy/tree/master/mypyc .. docs: https://mypyc.readthedocs.io/en/latest/ :
mypyc docs > Using type annotations > Strict runtime type checking: https://mypyc.readthedocs.io/en/latest/using_type_annotation... :
> Mypyc ensures type safety both statically and at runtime. [...] `Any` types and erased types in general can compromise type safety, and this is by design. Inserting strict runtime type checks for all possible values would be too expensive and against the goal of high performance.
Oh my!
beartype docs: https://beartype.readthedocs.io/en/latest/ :
> Welcome to the Bearpedia
trycast: https://github.com/davidfstr/trycast :
from typing import TypedDict, Literal
Securely sending query parameters in HTTP headers
> Abstract: This document defines HTTP headers that enable browsers to pass redirect parameters securely during HTTP redirects without exposing them in URLs. The `Redirect-Query` header carries parameters traditionally sent via URL query strings, the `Redirect-Origin` header provides browser-verified origin authentication, and the `Redirect-Path` header enables path-based redirect validation. These headers address security and privacy concerns in authentication and authorization protocols such as OAuth 2.0 and OpenID Connect.
draft-hardt-httpbis-redirect-headers.md: https://github.com/dickhardt/redirect-headers/blob/main/draf...
Does this mean that revisions to, for example, the OAuth2 and OIDC protocols will be needed; or shouldn't there at least be a note about the concerns of the "HTTP Redirect Headers" draft (draft-hardt-httpbis-redirect-headers)? https://github.com/dickhardt/redirect-headers/blob/main/draf...
Open issues:
- "Use of unsafe/unsecure headers (under Fetch)" https://github.com/dickhardt/redirect-headers/issues/2 :
> All headers with the Sec- and Proxy- prefixes are forbidden request-headers. This rule also provides backwards compatibility as it ensures that newly introduced forbidden request-headers are forbidden in older browser. So, you probably want to rename Request-Origin to `Sec-Request-Origin`, at least
How to review this as an IETF RFC?
Lots of discussion in the OAuth mailing group about the implications for OAuth/OIDC. The thread starts here: https://mailarchive.ietf.org/arch/msg/oauth/FFkUlOiz7I4K03pq...
> How to review this as an IETF RFC?
Suggest joining the OAuth mailing list and responding there, or creating a PR against the repo (but I'd first read the discussion on the mailing list thread to avoid duplication).
The Jeff Dean Facts
Hey! I created Jeff Dean Facts! Not the jokes themselves, but the site that collected them.
It was in 2008 I think (give or take a year, can't remember). I worked at Google at the time. Chuck Norris Facts was a popular Internet meme (which I think later faded when he came out as MAGA, but I digress...). A colleague (who wishes to remain anonymous) thought the idea of Jeff Dean Facts would be funny, and April 1st was coming up.
At the time, there was a team working on an experimental web app hosting platform code named Prometheus -- it was later released as App Engine. Using an early, internal build I put together a web site where people could submit "facts" about Jeff Dean, rate each other's facts on a five-star scale, and see the top-rated facts. Everything was anonymous. I had a few coworkers who are funnier than me populate some initial facts.
I found a few bugs in Prometheus in the process, which the team rapidly fixed to meet my "launch date" of April 1st. :)
On the day, which I think was a Sunday, early in the morning, I sent an email to the company-wide "misc" mailing list (or maybe it was eng-misc?) from a fake email address (a google group alias with private membership), and got the mailing list moderator to approve it.
It only took Jeff an hour or two to hack his way through the back-end servers (using various internal-facing status pages, Borg logs, etc.) to figure out my identity.
But everyone enjoyed it!
My only regret is that I targeted the site specifically at Jeff and not Sanjay Ghemawat. Back then, Jeff & Sanjay did everything together, and were responsible for inventing a huge number of core technologies at Google (I have no idea to what extent they still work together today). The site was a joke, but I think it had the side effect of elevating Jeff above Sanjay, which is not what I intended. Really the only reason I targeted Jeff is because he's a bit easier to make fun of personality-wise, and because "Jeff Dean Facts" sort of rolls off the tongue easier than "Sanjay Ghemawat Facts" -- but in retrospect this feels a little racist. :(
My personal favorite joke is: Jeff Dean puts his pants on one leg at a time, but if he had more than two legs, you'd see his approach is actually O(log n).
Solar hydrogen can now be produced efficiently without the scarce metal platinum
ScholarlyArticle:
"Highly Efficient Platinum-Free Photocatalytic Hydrogen Evolution From Low-cost Conjugated Polymer Nanoparticles" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.20... :
> Abstract: While the interest in hydrogen photocatalysis from organic semiconductors is rapidly growing, there is a necessity to achieve hydrogen production without platinum (Pt), considering its price, availability and toxicity. In this work, this is demonstrated that high hydrogen evolution reaction (HER) efficiencies can be achieved without the use of Pt. A series of low-cost conjugated polymers are designed around the dibenzothiophene-S,S-sulfoxide (BTSO) unit, and self-assembled as nanoparticles in water via the nanoprecipitation technique. This is highlighted that how side chain engineering, nanoparticle morphology and pH influence the hydrogen evolution rate. Optoelectronic properties are improved through a Donor-Acceptor structure, resulting in an unprecedented hydrogen evolution reaction rate of 209 mmol g−1 h−1 in the absence of Pt. A clear correlation between high efficiencies and number of BTSO units within the polymer backbone can be established. The design rules pioneer the design of future organic materials is presented for a cost-efficient and sustainable hydrogen photocatalysis.
But it cannot be transported or stored in any reasonable way. Hydrogen is a dead-end fake green fuel pushed by petroleum companies because the cheapest and easiest way to make it is from petroleum so adopting it would mean there’s always a market for oil.
- "Sodium borohydride a better Hydrogen storage, transport solution than Ammonia" (2025) https://news.ycombinator.com/item?id=46279149
- Hydrogen fueling stations and processing facilities must be ventilated and brushless.
Adding Support for ARM MTE Debugging to QEMU
Also re: ARM MTE Memory Tagging Extension:
- "Cage: Hardware-Accelerated Safe WebAssembly" (2024) https://news.ycombinator.com/item?id=46151170
Electronic nose for indoor mold detection and identification
Usually you have to spend thousands of dollars for an expert with the right equipment to focus on select areas of your home (areas that are suspect).
Are there ways to do a full-house scan yourself?
What sort of an indoor drone could this sensor be most usefully added to?
UAS move a lot of air - I can't imagine this would be practical. At the very least, you'd need to put the sensor on a boom long enough to make it unwieldy.
Many of the described robots in the mujoco_menagerie gallery could probably do the job: https://github.com/google-deepmind/mujoco_menagerie
The sensor recovery time limits the sampling time. A 28 second sensor recovery time with SnO2-Gr (graphene oxide) would be more useful for sampling larger volumes than minutes with just SnO2 FWIU; https://news.ycombinator.com/context?id=46521695
Looks like adding Graphene to SnO2 (Tin Oxide) sensors increases sensitivity and battery life and reduces operating temperature to room temperature, for an increase in manufacturing complexity and cost.
"Extraordinary Improvement of Gas-Sensing Performances in SnO2 Nanofibers Due to Creation of Local p–n Heterojunctions by Loading Reduced Graphene Oxide Nanosheets" (2015) https://pubs.acs.org/doi/10.1021/am5071656
"Highly Sensitive and Selective SnO2-Gr Sensor Photoactivated for Detection of Low NO2 Concentrations at Room Temperature" (2024) https://www.mdpi.com/2079-4991/14/24/1994 ; UV photo activation of SnO2-Gr sensors
"Engineering of SnO2–Graphene Oxide Nanoheterojunctions for Selective Room-Temperature Chemical Sensing and Optoelectronic Devices" (2020) https://pubs.acs.org/doi/10.1021/acsami.0c09178
"Humidity-sensing Performance of Graphene/SnO 2 Nanocomposites" (2025) https://sensors.myu-group.co.jp/sm_pdf/SM4049.pdf ; 28s recovery time instead of minutes
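The practical effect of recovery time on a survey (the 28 s figure is from the SnO2-Gr paper above; 3 minutes is an assumed midpoint for plain SnO2's "minutes"):

```python
# Sample cycles per hour as a function of sensor recovery time.
def cycles_per_hour(recovery_s: float) -> float:
    return 3600.0 / recovery_s

sno2_gr = cycles_per_hour(28)   # ~128 cycles/hour (SnO2-Gr, 28 s recovery)
sno2 = cycles_per_hour(180)     # 20 cycles/hour (plain SnO2, assumed 3 min)
print(f"SnO2-Gr: {sno2_gr:.0f}/h, SnO2: {sno2:.0f}/h, "
      f"speedup {sno2_gr / sno2:.1f}x")
```

So a drone-mounted SnO2-Gr sensor could sample roughly 6x more locations per flight than plain SnO2, before accounting for transit time between sample points.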
New California law requires a working fridge in all apartments
Also a good idea: ask chemistry and engineering majors to develop a more sustainable refrigerant and/or refrigerator
There already exists one: CO2.
Spherical Snake
Reminds me of Uncle Worm, among the better TI-83+ games. It's 2D, but the snake is curvy.
That says 2004.
Snake (video game genre) https://en.wikipedia.org/wiki/Snake_(video_game_genre) has a picture of Hyper-Wurm on a TRS-80, which looks curved or spherical
Is there already a Snake on a Plane?
'Rock candy' technique offers simpler way to capture carbon directly from air
From "Passive direct air capture via evaporative carbonate crystallization" (2025) https://www.nature.com/articles/s44286-025-00308-5 :
> Abstract: [...] This passive, single-chemical-loop approach has the potential to reduce capital and levelized costs by approximately 42% and 32%, respectively, compared with conventional liquid-based direct air capture systems.
What is their projected cost per ton?
From 2025-09 https://news.ycombinator.com/item?id=45282882 :
> A techno-economic analysis estimates a levelized cost of capture of ~$70/tonneCO2 [with this membraneless electrochemical approach], compared to $137/tonneCO2 for conventional EMAR
> [ $50 ]
From 2025-11 https://news.ycombinator.com/item?id=46010414 :
> I just saw $26/ton for (non-CO2) carbon capture in 2025. Gravel is like $10-$50/ton.
From 2025-: https://www.nature.com/articles/s41893-025-01696-5 :
> Using uncertainty-aware cost modelling, including membrane cost, electricity prices, contingency factors and learning curves, we show that capture costs can reach US $50–100 per ton CO2 for natural gas power plants and as low as US $25–50 per ton CO2 for coal and cement plants, positioning this technology favourably against state-of-the-art capture processes.
But then the usability of the captured carbon;
What is more reusable than CO2-derived graphene filters caked in CO2?
Given sequestered carbon in a useful form, what products can be made?
In a post-fossil-fuel world, carbon capture with either plants or something mechanical would be necessary to make the things we make now with fossil fuels. We would still need fuels compatible with gas, diesel, and jet fuel, and still need plastic monomers, feedstocks for pharmaceuticals, etc.
Which plants absorb the most carbon?
Phytoplankton, Seagrass meadows; Redwoods, Mangroves, Peat bogs (Sphagnum peat moss)
Algae absorbs more CO2 than plants, but it's only sequestered if harvested and used to produce long lasting products.
Algae store solar energy as triglyceride lipids; triacylglycerols (TAGs). Fuel, cooking oils, and Omega-3 dietary supplements can be made from algae.
Is low-pressure liquid processing of [carbon] the least risky option?
Which plastics can't be functionally replaced with a bio substitute composed of materials like: lignin, cellulose, lignin vitrimer, algae, cornstarch/tapioca, gelatin, fractionated biofeedstock, carbonized lignin (carbon ceramics), graphene, vinegar, water, nitrogen, and co2 ?
> You cannot use the listed materials to replace plastics whose function relies on Fluorine (F), Silicon (Si), or Benzene-ring transparency.
A challenge: it says there's no sustainable alternative to C-F bonds, transparent high-impact plastics, or durable rubber that withstands thermal ranges of -50C to 250C and oil exposure.
TIL about fluorinase and tea (and simple C-F bonds in nature)
TIL about a new process for bio-based silica optics; Rice Husk Ash into optical-grade silica with acid leaching: "Soak [rice] husks in hot, dilute acid (Hydrochloric or Citric acid) for 1–2 hours" before pyrolysis at 600-700C, then add NaOH to make water glass (Na2SiO3) and water, and then add Sulfuric or Carbonic acid to extract the refined SiO2.
That's funny; water glass is used for (ancient) geopolymers fwiu
Coral, diatoms
Show HN: Enroll, a tool to reverse-engineer servers into Ansible config mgmt
Happy new year folks!
This tool was born out of a situation where I had 'inherited' a bunch of servers that were not under any form of config management. Oh, the horror...
Enroll 'harvests' system information such as what packages are installed, what services are running, what files have 'differed' from their out-of-the-box defaults, and what other custom snowflake data might exist.
The harvest state data can be kept as its own sort of SBOM, but also can be converted in a mere second or two into fully-functional Ansible roles/playbooks/inventory.
It can be run remotely over SSH or locally on the machine. Debian and Redhat-like systems are supported.
There is also a 'diff' mode to detect drift over time. (Years ago I used Puppet instead of Ansible and miss the agent/server model where it would check in and re-align to the expected state, in case people were being silly and side-stepping the config management altogether). For now, diff mode doesn't 'enforce' but is just capable of notification (webhook, email, stdout) if changes occur.
Since making the tool, I've found that it's even useful for systems where you already have in Ansible, in that it can detect stuff you forgot to put into Ansible in the first place. I'm now starting to use it as a 'DR strategy' of sorts: still favoring my normal Ansible roles day-to-day (they are more bespoke and easier to read), but running enroll with '--dangerous --sops' in the background periodically as a 'dragnet' catch-all, just in case I ever need it.
Bonus: it also can use my other tool JinjaTurtle, which converts native config files into Jinja2 templates / Ansible vars. That one too was born out of frustration, converting a massive TOML file into Ansible :)
Anyway, hope it's useful to someone other than me! The website has some demos and more documentation. Have fun every(any)-one.
Could it also detect changed package files; if there are per-package-file checksums like with `debsums` and `rpm -V`?
Does it check extended filesystem labels with e.g. getfacl for SELinux support?
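A minimal sketch of the per-package-file checksum idea (my own illustration, not how enroll, `debsums`, or `rpm -V` actually implements it): hash each file and compare against a stored manifest.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents (debsums uses MD5; SHA-256 here for illustration)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(manifest: dict[str, str], root: Path) -> list[str]:
    """Return relative paths whose current digest differs from the recorded one."""
    drifted = []
    for rel_path, recorded in manifest.items():
        target = root / rel_path
        if not target.exists() or file_digest(target) != recorded:
            drifted.append(rel_path)
    return drifted
```

Build the manifest at harvest time, then re-run `changed_files` in a diff mode to detect drift.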
I've also done this more than a few times and not written a tool.
At least once I've scripted, with something better than regex, the conversion of a configuration file into a Jinja2-templated configuration file (from the current package's default commented config file with the latest options). And then the need is to diff: non-executable and executable guidelines, the package default config (on each platform), and our config.
Sometimes it's better not to re-specify a default config param and value, but only if the defaults are sane on every platform. Cipher lists for example.
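The "don't re-specify defaults" idea can be sketched as a dict diff (a hypothetical helper of my own, not from any tool mentioned here): keep only the keys whose values differ from the package default, so settings like cipher lists can float with each platform's (sane) default.

```python
def non_default_params(ours: dict, defaults: dict) -> dict:
    """Keep only settings that differ from (or are absent in) the package defaults."""
    return {k: v for k, v in ours.items() if defaults.get(k) != v}

# Example: only Port and PermitRootLogin need to appear in the template;
# the cipher list matches the default and is dropped.
defaults = {"Ciphers": "strong-default", "Port": 22}
ours = {"Ciphers": "strong-default", "Port": 2222, "PermitRootLogin": "no"}
assert non_default_params(ours, defaults) == {"Port": 2222, "PermitRootLogin": "no"}
```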
P2V (physical to virtual) workflows don't result in auditable system policy like this.
Most of the OS and Userspace packages backed up in full system images (as with typical P2V workflows) are exploitably out of date in weeks or months.
To do immutable upgrades with rollback, rpm-ostree distros install the RPM packages atop the latest signed immutable rootfs image, and then layer /etc on top (and mount /var, which hosts flatpaks and /var/home). It keeps a list of packages to reinstall and does a smart merge of /etc. Unfortunately etckeeper (which auto-git-commits /etc before and after package upgrades) doesn't yet work with rpm-ostree distros.
Ansible does not yet work with rpm-ostree distros. IIRC the primary challenge is that Ansible wants to run each `dnf install` individually, which takes forever with rpm-ostree. Installing one long list of packages may or may not be equivalent to installing multiple groups of packages in the same sequence: it should be equivalent if the package install and post-install scripts are idempotent, but it is not if e.g. `useradd` is called multiple times without an explicit UID in package scripts (which also run as root).
I wrote a PR to get structured output (JSON) from `dnf history`, but it was for dnf4.
From https://news.ycombinator.com/item?id=43617363 :
> upgrading the layered firefox RPM without a reboot requires -A/--apply-live (which runs twice) and upgrading the firefox flatpak doesn't require a reboot, but SELinux policies don't apply to flatpaks which run unconfined FWIU.
Does it log a list of running processes and their contexts; with `ps -Z`?
There are also VM-level diff'ing utilities for forensic-level differencing.
Hi westurner!
> Could it also detect changed package files; if there are per-package-file checksums like with debsums and `rpm -V`?
Yes, that's exactly what it does. See https://git.mig5.net/mig5/enroll/src/branch/main/enroll/plat... and https://git.mig5.net/mig5/enroll/src/branch/main/enroll/rpm....
It also tries to ignore packages that came with the distro automatically, e.g. focusing on stuff that was explicitly installed (based on 'apt-mark showmanual' for Debian, and 'dnf -q repoquery --userinstalled' (and related commands, like 'dnf -q history userinstalled') for RH-like)
> Does it check extended filesystem labels with e.g. getfacl for SELinux support?
Not yet, but that's interesting, I'll look into it.
> At least once I've scripted, with something better than regex, the conversion of a configuration file into a Jinja2-templated configuration file (from the current package's default commented config file with the latest options).
Yep, that was the inspiration for my companion tool https://git.mig5.net/mig5/jinjaturtle (which enroll will automatically try and use if it finds it on the $PATH - if it can't find it, it will just use 'copy' mode for Ansible tasks, and the original files).
Note that running the `enroll manifest` command against multiple separate 'harvests' (e.g. harvested from separate machines) while storing them in the same common manifest location will 'merge' the Ansible manifests, thereby 'growing' the manifest as needed. Each host then 'feature flips' on/off which files/templates should be deployed to it, based on what was 'harvested' from that host.
> Does it log a list of running processes and their contexts; with `ps -Z`?
It doesn't use ps, but it examines systemctl to get a list of running services and also timers. Have a look at https://git.mig5.net/mig5/enroll/src/branch/main/enroll/syst...
Thanks for the other ideas! I'll look into them.
Thanks for your reply. Some follow-up questions:
Does it already indirectly diff the output of `systemd-analyze security`?
Would there be value to it knowing the precedence order of systemd config files? (`man systemd.unit`)
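A sketch (my own, not enroll's) of what "knowing the precedence order" could mean in practice: per `man systemd.unit`, files in /etc/systemd/system override /run/systemd/system, which overrides /usr/lib/systemd/system, and the first match wins (drop-in directories omitted for brevity):

```python
from pathlib import Path

# Standard system-mode search path, highest precedence first (see `man systemd.unit`).
UNIT_SEARCH_PATH = [
    Path("/etc/systemd/system"),
    Path("/run/systemd/system"),
    Path("/usr/lib/systemd/system"),
]

def effective_unit_file(unit: str, search_path=UNIT_SEARCH_PATH):
    """Return the unit file systemd would actually load, or None if not found."""
    for directory in search_path:
        candidate = directory / unit
        if candidate.exists():
            return candidate
    return None
```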
How to transform the generated playbooks to - instead of ansible builtins - use a role from ansible-galaxy to create users for example?
How to generate tests or stub tests (or a HEALTHCHECK command/script, or k8s Liveness/Readiness/Startup probes, and/or a Nagios or a Prometheus monitoring config,) given ansible inventory and/or just enroll?
Ansible Molecule used to default to pytest-testinfra for the verify step but the docs now mention an ansible-native way that works with normal inventory that can presumably still run testinfra tests as a verify step. https://docs.ansible.com/projects/molecule/configuration/?h=...
MacOS: homebrew_tap_module, homebrew_module, homebrew_cask_module, osx_defaults_module
Conda (Win/Mac/Lin, AMD64, ARM64, PPC64, RISC-V 64 (*), WASM)
CycloneDX/cyclonedx-python generates SBOMs from venv, conda, pip requirements.txt, pipenv, poetry, pdm, uv: https://github.com/CycloneDX/cyclonedx-python
Container config: /var, $DOCKER_HOST, Podman, Docker, $KUBECONFIG defaults to ~/.kube/config (kube config view), Podman rootless containers
Re: vm live migration, memory forensics, and diff'ing whole servers:
Live migration and replication solutions already have tested bit-level ~diffing that would also be useful to compare total machine state between 2 or more instances. At >2 nodes, what's anomalous? And how and why do the costs of convergence-based configuration management differ from golden image -based configuration management?
E.g. vmdiff diffs VMs. The README says it only diffs RAM on Windows. E.g. AVML and linpmem and volatility3 work with Linux.
/? volatility avml inurl:awesome https://www.google.com/search?q=volatiloty+avml+inurl%3Aawes...
Resistance training load does not determine hypertrophy
What about Time Under Tension?
"Equalization of Training Protocols by Time Under Tension Determines the Magnitude of Changes in Strength and Muscular Hypertrophy" (2022) https://journals.lww.com/nsca-jscr/fulltext/2022/07000/equal... :
> Abstract: [...] In conclusion, training protocols with the same TUT promote similar strength gains and muscle hypertrophy. Moreover, considering that the protocols used different numbers of repetitions, the results indicate that training volumes cannot be considered separately from TUT when evaluating neuromuscular adaptations.
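The equalization in that study is just arithmetic: time under tension is sets x reps x seconds per rep, so protocols with different rep counts can land on the same TUT. A toy illustration (the numbers are mine, not the study's):

```python
def time_under_tension(sets: int, reps: int, seconds_per_rep: float) -> float:
    """Total seconds of muscular tension for a training protocol."""
    return sets * reps * seconds_per_rep

# Two protocols with different volumes but identical TUT:
heavy_slow = time_under_tension(sets=3, reps=6, seconds_per_rep=6)   # 108 s
light_fast = time_under_tension(sets=3, reps=12, seconds_per_rep=3)  # 108 s
assert heavy_slow == light_fast == 108
```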
So could I just do one super slow (some minutes) squat per week at like 60% and get all the benefits still?
I’m not at all a biology expert, but if the squat is actually pushing you somewhat close to your limit (it’s not super easy), you’ll definitely get stronger. Case in point: isometric exercises. Also: folks who do planks for a few weeks/months.
Who invented the transistor?
I typed this into Gemini, "who invented the transistor?" and it correctly cites "John Bardeen, Walter Brattain, and William Shockley at Bell Telephone Laboratories in December 1947".
Who invented the electric fence gate?
How does the electric fence gate lead to transistors?
Relay > History: https://en.wikipedia.org/wiki/Relay :
> [1809, 1835, 1837, 1840 (Samuel Morse; Morse telegraph) ]
Electric gate > Electric Gate History: https://en.wikipedia.org/wiki/Electric_gate :
> One of the first electric gates was invented by a Canadian Fred W. Watson in 1881. It was designed to be used for railway systems ... “A catch connected with an electro-magnet keeps a gate closed,” reported The National Tribune on October 9, 1884. [3]
Flip-flop (electronics) > History: https://en.wikipedia.org/wiki/Flip-flop_(electronics)
Quickemu: Quickly create and run optimised Windows, macOS and Linux VMs
IOMMU GPU passthrough with device selection would be a helpful feature: https://www.google.com/search?q=gpu+passthrough+qemu
LXD manages qemu VMs and supports snapshotting, live migration, and a number of storage drivers: https://news.ycombinator.com/item?id=45270468
virtio-gpu-rutabaga works with Android VMs on qemu, but does it work with Win/Mac/Lin: https://news.ycombinator.com/item?id=42921315
I would love that so much. That's the feature I wanted to play for the longest while, but the shortage of time just doesn't let me.
That would be a nice step up.
[dead]
Designing Predictable LLM-Verifier Systems for Formal Method Guarantee
[flagged]
> intersection of LLMs and formal verification
/? TLA LLM https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... : 1 submission, ~20 comments
"AI will make formal verification go mainstream" (2025-12) https://news.ycombinator.com/item?id=46294574
PEP 761 – Deprecating PGP signatures for CPython artifacts (2024)
Dislike. Don't put all your eggs in one basket.
I am aware of gpg.fail; https://news.ycombinator.com/item?id=46403200
Have they yet eliminated the single points of failure from Sigstore (i.e. the centralized database)?
From the PEP: https://peps.python.org/pep-0761/#support-for-offline-verifi... :
> During the pre-PEP discussion, there was a question of whether offline verification was supported by Sigstore. Using a Sigstore bundle (.sigstore) file, Sigstore clients support verifying the artifact completely offline.
> Using offline verification with Sigstore requires disabling root of trust updates and “pinning” a root of trust in a file to use during verification.
> [...]
> Offline verification also makes revocation checks impossible, but this is similar to PGP’s model where revocation of keys requires an online lookup.
How does this compare to CRL and OCSP for key revocation?
Fairly certain this just reinvents the wheel with fewer years of review
Synchronizing CT Certificate Transparency logs to browsers is apparently considered infeasible. Merkle Certificates may help with this too?
Mostlymatter: A fork of Mattermost by Framasoft
https://framagit.org/framasoft/framateam/mostlymatter says last updated in 2024?
Haven't there been CVEs in the product since they forked it in 2024?
/? CVE mattermost https://www.google.com/search?q=cve+mattermost :
https://www.cvedetails.com/vulnerability-list/vendor_id-2145...
There's also Mattermost-LDAP, though it doesn't look like there's a support contract for when compliance is important: https://github.com/Crivaledaz/Mattermost-LDAP
IIRC there are also 3rd party SSO/LDAP/AD adapters for GitLab?
Startups Aim to Integrate Radio Cables with GPUs
NewsArticle: "AI Data Centers Demand More Than Copper Can Deliver: Radio and terahertz links could be better, faster, and cheaper" (2025) https://spectrum.ieee.org/rf-over-fiber
What about graphene and CNT cabling instead?
How do the power requirements for carbon-based cabling differ from those for fiber and copper?
IIRC there are already HV cables similar to these for High Voltage Power Line power transmission applications?
"Core-sheath composite electric cables with highly conductive self-assembled carbon nanotube wires and flexible macroscale insulating polymers for lightweight, metal-free motors" (2025) https://link.springer.com/article/10.1007/s42114-025-01302-4
"Ultrastrong Carbon Nanotubes–Copper Core–Shell Wires with Enhanced Electrical and Thermal Conductivities as High-Performance Power Transmission Cables" (2022) https://pubs.acs.org/doi/abs/10.1021/acsami.2c13686 .. https://scholar.google.com/scholar?cites=1535585233454942914...
"Highly conductive hybrid carbon nanotube fibers: Strategies and future directions for replacing copper with next-generation conductors" (2025) https://www.sciencedirect.com/science/article/abs/pii/S13598...
Rex is a safe kernel extension framework that allows Rust in the place of eBPF
As a lover of Rust, ooo boy does this sound like a bad idea. The Rust compiler is not guaranteed to always output safe code against malicious inputs given that there’s numerous known soundness bugs that allow exploiting this. Unless I’m missing something this is a security nightmare of an idea.
Also there’s reasons why eBPF programs aren’t allowed to run arbitrarily long and this just ignores that problem too.
I asked about this when they presented the project at the Linux Plumbers conference. They replied that it's not really intended to be a security boundary, and that you should not let anyone malicious load these programs.
Given this thread model, I think their project is entirely reasonable. Safe Rust will prevent accidental mistakes even if you could technically circumvent it if you really try.
As I understand it eBPF has also given up on that due to Spectre. As a result you need root to use it on most distros anyway, and the kernel devs aren't going to expand its use (some systems are stuck on cBPF).
So it's not like eBPF is secure and this isn't. They're both insecure in different ways.
So eBPF for a WAF isn't worth it?
re: eBPF and WAFs: https://news.ycombinator.com/item?id=45951011
From https://news.ycombinator.com/context?id=43564972 :
> Should a microkernel implement eBPF and WASM, or, for the same reasons that justify a microkernel should eBPF and most other things be confined or relegated or segregated in userspace; in terms of microkernel goals like separation of concerns and least privilege and then performance?
"Isolated Execution Environment for eBPF" (2025-04) https://news.ycombinator.com/item?id=43697214
"ePass: Verifier-Cooperative Runtime Enforcement for eBPF" (2025-12) https://ebpf.foundation/epass-verifier-cooperative-runtime-e... .. https://news.ycombinator.com/item?id=46412121
Show HN: Mysti – Claude, Codex, and Gemini debate your code, then synthesize
Hey HN! I'm Baha, creator of Mysti.
The problem: I pay for Claude Pro, ChatGPT Plus, and Gemini but only one could help at a time. On tricky architecture decisions, I wanted a second opinion.
The solution: Mysti lets you pick any two AI agents (Claude Code, Codex, Gemini) to collaborate. They each analyze your request, debate approaches, then synthesize the best solution.
Your prompt → Agent 1 analyzes → Agent 2 analyzes → Discussion → Synthesized solution
Why this matters: each model has different training and blind spots. Two perspectives catch edge cases one would miss. It's like pair programming with two senior devs who actually discuss before answering.
What you get: * Use your existing subscriptions (no new accounts, just your CLI tools) * 16 personas (Architect, Debugger, Security Expert, etc) * Full permission control from read-only to autonomous * Unified context when switching agents
Tech: TypeScript, VS Code Extension API, shells out to claude-code/codex-cli/gemini-cli
License: BSL 1.1, free for personal and educational use, converts to MIT in 2030 (would love input on this, does it make sense to just go MIT?)
GitHub: https://github.com/DeepMyst/Mysti
Would love feedback on the brainstorm mode. Is multi-agent collaboration actually useful or am I just solving my own niche problem?
Anyone knows of something similar but for terminal?
Update:
I've already found a solution based on a comment, and modified it a bit.
Inside claude code i've made a new agent that uses the MCP gemini through https://github.com/raine/consult-llm-mcp. this seems to work!
Claude code:
Now let me launch the Gemini MCP specialist to build the backend monitoring server:
gemini-mcp-specialist(Build monitoring backend server) ⎿ Running PreToolUse hook…
https://github.com/just-every/code "Every Code - push frontier AI to it limits. A fork of the Codex CLI with validation, automation, browser integration, multi-agents, theming, and much more. Orchestrate agents from OpenAI, Claude, Gemini or any provider." Apache 2.0 ; Community fork;
just-every/code: https://github.com/just-every/code ... https://news.ycombinator.com/item?id=44959671
How sustainability is driving innovation in functionalized graphene materials
ScholarlyArticle: "Green Mechanochemical Production of Amino-Acid-Derived N-Doped Graphene for Functional Vitrimer Composites" (2025) https://pubs.acs.org/doi/10.1021/acssuschemeng.5c09378
Letter from the authors: "How sustainability is driving innovation in functionalized graphene materials" https://phys.org/news/2025-12-sustainability-functionalized-...
New reactor produces clean energy and carbon nanotubes from natural gas
A big research area, see "Turquoise Hydrogen"
https://www.aga.org/its-time-to-pay-attention-to-turquoise-h...
in contrast to "Grey Hydrogen" [1] made by steam reforming
https://en.wikipedia.org/wiki/Steam_reforming
The self-taught ChemE in me worries a little about any process that makes a solid product since that product could plate out inside the machine and clog it up, but maybe that's not really a problem here.
[1] "Blue" if you capture the CO2
[flagged]
It's funny: what he posts is similar to mine, except I blend in some stuff that's "characteristic of the HN front page" (Rust and its discontents) and no sports or political science, but the science and engineering themes are similar.
Here's a log of what I post: https://westurner.github.io/hnlog/
Which research themes do they share in common?
[flagged]
> If lignin is not enough to make the inflamed CNTs char instead of ~aerosolize, is the phosphorous in phytic acid would encase the CNT in phosphorus and char.
If lignin is not enough to make the inflamed CNTs char instead of ~aerosolize, would the phosphorus in phytic acid encase the CNT in phosphorus and char (preventing the health hazards of CNT if burnt)?
Is "aerosolized" the word? How could you correct me to help us understand this?
I don’t understand how HN works I guess; I submitted this exact article 24 hours ago, yet the hivemind has yet to call this a dupe. Not complaining, just truly don’t get it. When I submit a dupe it tells me?
Could be the time of day?
dupe: https://news.ycombinator.com/item?id=46368776
It didn't show any matching posts when I shared the URL.
ENH: HN: search for matching articles on debounced update to the submit URL field
Math seems wrong "The team found that the loop design would convert 75% of the gas entering the system into useful resources, producing carbon nanotubes and hydrogen in a 3:1 mass ratio. In other words, for every 4 kilograms of methane the system successfully converts into useful resources, it makes 3 kilograms of nanotubes and 1 kilogram of hydrogen."
The 75% and the 3:1 ratio are not related. Methane has the formula CH4, so for 12 grams of carbon you have 4 grams of hydrogen. If you successfully break down the molecule CH4 you get a carbon-hydrogen ratio of 3:1. Now, let's say you start with 5.33 kg of methane. Only 75% gets converted, so that's 4 kg. Of that, you get 3 kg of carbon and 1 kg of hydrogen.
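The mass-balance arithmetic above can be checked against molar masses; everything here follows from CH4's composition, with no new data:

```python
# Molar masses in g/mol
C, H = 12.011, 1.008
CH4 = C + 4 * H

carbon_fraction = C / CH4         # ~0.749 of methane's mass is carbon
hydrogen_fraction = 4 * H / CH4   # ~0.251 is hydrogen
assert abs(carbon_fraction / hydrogen_fraction - 3) < 0.03  # the 3:1 mass ratio

# Start with 5.33 kg methane; 75% is converted:
converted = 5.33 * 0.75           # ~4.0 kg
carbon_kg = converted * carbon_fraction      # ~3.0 kg of nanotube carbon
hydrogen_kg = converted * hydrogen_fraction  # ~1.0 kg of hydrogen
```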
ScholarlyArticle: "Production of hydrogen and carbon nanotubes from methane using a multi-pass floating catalyst chemical vapour deposition reactor with process gas recycling" (2025) https://www.nature.com/articles/s41560-025-01925-3
Oh come on. Produces 'clean energy' from natural gas? Yeah of course.
It has nothing to do with clean energy, other than the downstream effects of cheap CNTs should the process be refined enough to scale and commercialize. The hydrogen is recycled in the process. The primary thing that it produces are CNT aerogels. However according to the paper catalyst efficiency is shit. Says less than 0.1% of catalyst particles actually grew CNTs. No wonder CNTs are currently ≥$200/kg. Needs improvement by either dramatically increasing catalyst efficiency or finding dirt cheap iron/sulfur sources.
Carbon fouling is also a major block to scale. 15-20% of carbon deposits as soot on reactor walls. At a 1 MW scale that's 15-30 kg/h of crud degrading the catalytic heat transfer. Continuous cleaning or scheduled downtime would drive OPEX out of possible realities.
Hot hydrogen loops are a son-of-a-bitch and mean continuous embrittlement of pipes, valves, and pumps. Seals that work at temperature. H2 leak detection. Some real heavyweight process safety engineering here.
The reactor chemistry is solved. The paper proves it works.
The scale-up is where clean-tech startups go to burn money and die.
Would an electrochemical plasma process that takes graphene filters caked in CO2 (for e.g. CNT production) be more useful?
Aluminum red mud is 40% iron.
Is hydrogen useful for plasma enhanced CVD?
Are there electrical plasma improvements to CVD specifically for CNT carbon nanotube production?
What optimizations of CVD produce nonmetallic aligned carbon nanotubes (with band gaps useful for semiconductor production for FET field-effect transistors, and integrated optical components)?
From gemini3pro, for human consideration:
> [ PECVD: Plasma-enhanced CVD] allows VA-CNT synthesis at temperatures as low as 450–650°C
> High-flux hydrogen (H_2) carrier gas is used in floating-catalyst CVD (FCCVD) to reduce the number of nuclei, favoring isolated semiconducting nanotubes over bundled metallic ones.
> Electric Field Alignment: PECVD uses the built-in electric field of the plasma sheath to guide nanotubes into vertical or horizontal alignment as they grow.
> [ Kite growth CVD with nonmetallic seeds like nanodiamond grow in tip-growth mode ]
Which would be useful for FETs in carbon-based chips.
Couldn't hydrogen (cold) plasma clean a CVD reaction chamber?
Snitch – A friendlier ss/netstat
When I saw this headline I assumed it was Little Snitch an existing network monitor and firewall for Macs.
Might need a different name.
Wow that's so nice, would there be an equivalent for PC? (Windows or Linux)
dotfiles/scripts/netstatpsutil.py: https://github.com/westurner/dotfiles/blob/develop/scripts/n...
Textual or similar for a top-like mode would be cool someday
scripts/lsof.sh does lsof from /proc/*: https://github.com/westurner/dotfiles/blob/develop/scripts/l...
CO2 batteries that store grid energy take off globally
> The tried-and-true grid-scale storage option—pumped hydro [--> https://spectrum.ieee.org/a-big-hydro-project-in-big-sky-cou... ], in which water is pumped between reservoirs at different elevations—lasts for decades and can store thousands of megawatts for days.
> Media reports show renderings of domes but give widely varying storage capacities [--> https://www.bloominglobal.com/media/detail/worlds-largest-co... ]—including 100 MW and 1,000 MW.
It looks like the article text is using the wrong unit for energy capacity in these contexts. I think it should be megawatt-hours, not megawatts. If this is true, this is a big yikes for something coming out of the Institute of Electrical and Electronics Engineers.
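Power and energy differ by a time factor, which is why the unit matters; a minimal sketch of the distinction:

```python
def energy_capacity_mwh(power_mw: float, duration_h: float) -> float:
    """Energy (MWh) is power (MW) multiplied by discharge duration (h)."""
    return power_mw * duration_h

# A "100 MW" plant that can discharge for 10 hours stores 1,000 MWh;
# quoting "100 MW" alone says nothing about how long it can sustain that output.
assert energy_capacity_mwh(100, 10) == 1000
```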
> big yikes for something coming out of the Institute of Electrical and Electronics Engineers.
Besides the unit flub, there's an unpleasant smell of sales flyer to the whole piece. Hard data spread all over, but couldn't find efficiency figures. Casual smears such as "even the best new grid-scale storage systems on the market—mainly lithium-ion batteries—provide only about 4 to 8 hours of storage" (huh, what, why?). I could also have used an explanation of why CO2, instead of nitrogen.
> provide only about 4 to 8 hours of storage" (huh, what, why?)
Because the most efficient way to make money with a lithium ion battery (or rather the marginal opportunity after the higher return ones like putting it in a car are taken) is to charge it in the few hours of when electricity is cheapest and discharge it when it is most expensive, every single day, and those windows generally aren't more than 8 hours long...
Once the early opportunities are taken, lower-value ones will follow: storing more energy and charging and discharging at a lower margin or less frequently. But we aren't there yet.
Advertising that your new technology doesn't do this is taking a drawback (it requires a huge amount of scale in one place to be cost competitive) and pretending it's an advantage. The actual advantage, if there is one, is just that at sufficient scale it's cheaper (a claim I'm not willing to argue either way).
It ought to be cheaper at scale. Batteries' cost scales linearly with storage capacity. Cost for a plant like this scales linearly with the storage rate - the compressor and turbine are the expensive part, while the pressure vessels and gas bags are relatively cheap.
The bigger you build it, the less it costs per MWh of storage.
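That scaling argument can be made concrete with a toy cost model (all coefficients below are made up for illustration): battery cost grows with energy capacity, while a compressed-gas plant's cost is dominated by the power-rated machinery plus comparatively cheap storage volume.

```python
def battery_cost(energy_mwh: float, usd_per_mwh: float = 300_000) -> float:
    """Batteries: cost scales roughly linearly with energy capacity."""
    return energy_mwh * usd_per_mwh

def gas_plant_cost(power_mw: float, energy_mwh: float,
                   usd_per_mw: float = 1_500_000,
                   usd_per_mwh: float = 50_000) -> float:
    """Thermo-mechanical storage: expensive turbomachinery (per MW),
    cheap pressure vessels / gas bags (per MWh)."""
    return power_mw * usd_per_mw + energy_mwh * usd_per_mwh

# Doubling duration at fixed power doubles the battery cost,
# but raises the gas plant cost by much less:
assert battery_cost(2000) == 2 * battery_cost(1000)
assert gas_plant_cost(100, 2000) < 2 * gas_plant_cost(100, 1000)
```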
> Energy Dome expects its LDES solution to be 30 percent cheaper than lithium-ion.
Grid-scale lithium is dropping in cost about 10-20% per year, so with a construction time of 2 years per the article, lithium will be cheaper by the time the next plant is completed.
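That claim is just compounding: at a 10-20% annual decline over a two-year construction window, lithium lands at roughly 64-81% of today's cost by completion.

```python
def future_cost_fraction(annual_decline: float, years: int) -> float:
    """Remaining cost fraction after compounding an annual price decline."""
    return (1 - annual_decline) ** years

low_decline = future_cost_fraction(0.10, 2)   # 0.81 of today's cost
high_decline = future_cost_fraction(0.20, 2)  # 0.64 of today's cost
assert round(low_decline, 2) == 0.81
assert round(high_decline, 2) == 0.64
```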
LDES: Long-Duration Energy Storage
Grid energy storage: https://en.wikipedia.org/wiki/Grid_energy_storage
Metrics for LDES: Levelized Cost of Storage (LCOS), Gravimetric Energy Density, Volumetric Energy Density, Round-Trip Efficiency (RTE), Self-Discharge Rate, Cycle Life, Technical Readiness Level (TRL), Power-to-Energy Decoupling, Capital Expenditure (CAPEX), R&D CapEx, Operational Expenditure (OPEX), Charging Cost, Response Time, Depth of Discharge, Environmental & Social Governance (ESG) Impact
Li-ion and even LFP batteries degrade; given a daily discharge cycle, they'll be at 80% capacity in 3 years. Gas pumps and tanks won't lose any capacity.
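The "80% after 3 years of daily cycling" figure implies a per-cycle fade you can back out (a rough geometric model of my own; real fade curves aren't this clean):

```python
cycles = 3 * 365                             # one full cycle per day for 3 years
per_cycle_retention = 0.80 ** (1 / cycles)   # ~0.9998 retained per cycle
fade_per_cycle = 1 - per_cycle_retention
print(f"~{fade_per_cycle * 100:.3f}% capacity lost per daily cycle")

# Sanity check: compounding back over 3 years recovers the 80% figure.
assert abs(per_cycle_retention ** cycles - 0.80) < 1e-9
```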
Lithium burns and releases toxic fumes. Carbon-based solid-state batteries that don't burn would be safe for buses.
There are a number of new methods for reconditioning lithium instead of recycling.
Biodegradable batteries would be great for many applications.
You can recycle batteries at big box stores; find the battery recycling box at Lowes and Home Depot in the US.
Synthetic pathway for biocatalysis of formate from electrochemically reduced CO2
ScholarlyArticle: "A synthetic cell-free pathway for biocatalytic upgrading of formate from electrochemically reduced CO2" (2025) https://www.nature.com/articles/s44286-025-00315-6
NewsArticle: "Scientists Engineer Synthetic Metabolism That Eats CO2 Waste" (2025) https://scienceblog.com/scientists-engineer-synthetic-metabo... :
> The system, called ReForm, takes formate (a simple liquid made from captured carbon dioxide) and transforms it into acetyl-CoA, the molecular currency that sits at the center of every living cell’s metabolism
Green Production of Amino-Acid-Derived N-Doped Graphene for Vitrimer Composites
ScholarlyArticle: "Green Mechanochemical Production of Amino-Acid-Derived N-Doped Graphene for Functional Vitrimer Composites" (2025) https://pubs.acs.org/doi/10.1021/acssuschemeng.5c09378
NewsArticle: "This solvent-free process makes graphene both conductive and easy to disperse" https://interestingengineering.com/science/solvent-free-proc...
I've been working on a green chip fabrication concept with AI that's consumed more and more of my thoughts lately.
Whether wafers can be made out of Lignin-Vitrimer (which I learned about from NREL's work), and what are the advantages and disadvantages.
Advantages include cost and sustainability and aromatic carbon rings that can be (LCS) lased into laser-induced graphene. Disadvantages include the likelihood of wafers cupping or bowing, and an unfortunate shortage of highly-refined Lignin.
From what I've been reading (from AI and ScholarlyArticles and NewsArticles and Wikipedia), there are so many uses for Lignin that we should send an APB to the tree pulp quarterly about just buying lignin refining capability for all of the tree paper pulp factories.
I had - as a vibe physics'ed concept - Carbon Nanotube (CNT) Ink in Ethyl Lactate as the green solvent, to fill into LCS laser ablated grooves in Lignin-Vitrimer for alignment prior to locking it in with LCS (Laser Compression Shock). The model said that the Ethyl Lactate evaporating would somewhat adhere the CNT in place for lasing.
CNT Ink for this and other applications could be made with Hexyl-Cellulose or Photo-Cleavable Lignin Polymer (PCLP). Hexyl-Cellulose would leave char to vacuum/wash. PCLP would make for chips that are destroyed by UV during production at least, but a coat of regular lignin would block UV.
This concept process has foamed lignin for packaging.
Interestingly, amino acids (the building blocks of proteins) are one of the solutions that the model proposed for straining a band gap usable for transistors into carbon nanotubes. Straining centrifuge-separated nonmetallic CNT (e.g. from pyrolysis of cellulose) with lignin might also work. Though, in a different context, the same model suggested that just centrifuging pyrolysis CVD-produced CNTs would yield 33% metallic and 66% non-metallic semiconducting carbon nanotubes, but their band gaps aren't that wide and they're not aligned.
I got into trying to make a monochrome and then an electronic-ink -like color display out of graphene and/or carbon nanotubes. TIL that adamantane is the simplest nano diamond, and ava-adamantane has nitrogen in place of a carbon group (which is probably useful for NV centers in diamond-based quantum computers).
HBM consumes around three times the wafer capacity of DDR5 per gigabyte
You can improve the effectiveness and professionalism of your communication by spelling out initialisms the first time you use them, like this: high-bandwidth memory (HBM).
After you have spelled it out, feel free to use the initialism as much as you like for brevity.
You can look up acronyms that are contextually indicated; in this case by the word "DDR5".
HN only supports a limited number of characters in headlines.
Searching for `hbm ddr5`, for example, appears to find the definition of the acronym. Adding the word `Wikipedia` finds an article about same.
IIUC Silicon Carbide (SiC) wafers are an alternative to doped silicon (Si) wafers, but EV chargers are built out of 6x6 SiC wafers and they're very expensive.
Moving semiconductor fabrication off of scarce, highly processed commodity inputs would help eliminate current production bottlenecks.
Go ahead, self-host Postgres
Self-hosting is more a question of responsibility I'd say. I am running a couple of SaaS products and self-host at much better performance at a fraction of the cost of running this on AWS. It's amazing and it works perfectly fine.
For client projects, however, I always try and sell them on paying the AWS fees, simply because it shifts the responsibility of the hardware being "up" to someone else. It does not inherently solve the downtime problem, but it allows me to say, "we'll have to wait until they've sorted this out, Ikea and Disney are down, too."
Doesn't always work like that and isn't always a tried-and-true excuse, but generally lets me sleep much better at night.
With limited budgets, however, it's hard to accept the cost of RDS (and we're talking with at least one staging environment) when comparing it to a very tight 3-node Galera cluster running on Hetzner at barely a couple of bucks a month.
Or Cloudflare, titan at the front, being down again today and the past two days (intermittently) after also being down a few weeks ago and earlier this year as well. Also had SQS queues time out several times this week, they picked up again shortly, but it's not like those things ...never happen on managed environments. They happen quite a bit.
Over 20 years I've had lots of clients on self-hosted setups, even self-hosting SQL on the same VM as the webserver, as you used to in the long-distant past for low-usage web apps.
I have never, ever, ever had a SQL box go down. I've had a web server go down once. I had someone who probably shouldn't have had access to a server accidentally turn one off once.
The only major outage I've had (2–3 hours) was when the box was also self-hosting an email server and I accidentally caused it to flood itself with failed delivery notices with a deploy.
I may have cried a little in frustration and panic but it got fixed in the end.
I actually find using cloud hosted SQL in some ways harder and more complicated because it's such a confusing mess of cost and what you're actually getting. The only big complication is setting up backups, and that's a one-off task.
Disks go bad. RAID is nontrivial to set up. Hetzner had a big DC outage that led to data loss.
Off site backups or replication would help, though not always trivial to fail over.
As someone who has set this up while not being a DBA or sysadmin.
Replication and backups really aren’t that difficult to set up properly with something like Postgres. You can also expose metrics around this to set up alerting if replication lag goes beyond a threshold you set or a backup didn’t complete. You do need to periodically test your backups, but that is also good practice.
I am not saying something like RDS doesn’t have value but you are paying a huge premium for it. Once you get to more steady state owning your database totally makes sense. A cluster of $10-20 VPSes with NVMe drives can get really good performance and will take you a lot farther than you might expect.
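A sketch of that alerting idea (the threshold and row shape here are assumptions; real numbers would come from the `sent_lsn`/`replay_lsn` columns of Postgres's `pg_stat_replication` view):

```python
# Illustrative replication-lag alert check; in production the (sent, replayed)
# byte positions would be queried from pg_stat_replication, not hardcoded.
LAG_THRESHOLD_BYTES = 16 * 1024 * 1024  # 16 MiB; an assumed alert threshold

def lag_alerts(replicas):
    """replicas: iterable of (name, sent_bytes, replayed_bytes) tuples."""
    return [name for name, sent, replayed in replicas
            if sent - replayed > LAG_THRESHOLD_BYTES]

replicas = [
    ("standby1", 10_000_000_000, 9_999_000_000),  # ~1 MB behind: fine
    ("standby2", 10_000_000_000, 9_900_000_000),  # ~100 MB behind: alert
]
print(lag_alerts(replicas))  # ['standby2']
```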
Even easier with sqlite thanks to litestream.
datasette and datasette-lite (WASM w/pyodide) are web UIs for SQLite with sqlite-utils.
For read-only applications, it's possible to host datasette-lite and the SQLite database as static files on a redundant CDN. Datasette-lite + URL redirect API + litestream would probably work well, maybe even read-write; electric-sql also has a sync engine (with optional partial replication), and there's PGlite (Postgres in WebAssembly).
Light intensity steers molecular assemblies into 1D, 2D or 3D structures
> They opted for azobenzene as the photoswitching unit and a barbituric acid-based merocyanine as the core responsible for hydrogen-bond-directed supramolecular polymorphism.
ScholarlyArticle: "Light-intensity-dependent out-of-equilibrium processes toward dimensionally distinct nanopolymorphs" (2025) https://www.cell.com/chem/fulltext/S2451-9294(25)00409-7 :
> ~Abstract: [...] In this study, by integrating supramolecular polymorphism with azobenzene photoisomerization, we constructed a light-driven out-of-equilibrium supramolecular system that exhibits dynamic transitions between three distinct assembly states with clearly different dimensionalities, simply by tuning the light intensity. By employing high-speed atomic force microscopy, we directly visualized and elucidated the underlying mechanisms of these dynamic structural transformations. Our findings provide a new platform for designing artificial materials that approach biological systems in adaptability and function, paving the way toward advanced, highly responsive smart molecular materials.
The Optics and Image Processing Behind Fundus Cameras
low-cost retinal imaging attachment: https://www.google.com/search?q=low-cost+retinal+imaging+att...
NIRS fundoscopy is clinically useful FWIU
Multispectral imaging (oh and faster OCT) in a portable unit would be clinically useful
Ford kills the All-Electric F-150
https://archive.ph/k2S9O for those who have read their last free article.
Interesting that Rivian seems to be doing fine in this space.
I was considering getting a Rivian and decided that in fact I would probably not allow the 24 year old dude at my local construction supply co to use a skid steer to drop a load of gravel into the bed of my $75k+ electric vehicle.
So instead I got a used Ford F150 (gas) and when the skid steer guy drops gravel into the bed I feel fine.
There is a lot to be said for that perspective. I wonder if any PMs have considered making the bed of the truck a field-replaceable unit (FRU) that you can swap out at home.
A modular open spec for attaching beds to trucks might be useful.
What are some possible attachments?
4-6.5' Truck Bed, Trailer, Camper, Mobile Workshop / Trade Rig, Car hauler, Bed with rack and storage and 270° awning
What all needs to be connected?
Mechanical attachment, 4WD/AWD/RWD axle and differential, CAN bus, backup camera, lights
Public link: Open Truck Bed Standard Proposal https://gemini.google.com/share/1e70ae398d26 :
"Kinetic-Link" (K-Link) open spec:
> The proposed Active-AWD Trade Platform utilizes a Through-the-Road (TTR) Hybrid architecture to decouple the mechanical drivetrain while maintaining synchronized propulsion via a Vehicle Control Unit (VCU). By integrating high-topology Axial Flux or Radial-Axial (RAX) in-wheel motors, the system achieves exceptional torque density within the limited packaging of a trailer wheel well. The control strategy relies on Zero-Force Emulation, utilizing a bi-directional load cell at the hitch to modulate torque output via a PID loop, ensuring the module remains neutrally buoyant to the tow vehicle during steady-state cruising. In low-traction environments, the system transitions to Virtual AWD, employing Torque Vectoring to mitigate sway and Regenerative Braking to prevent jackknifing, effectively acting as an intelligent e-Axle retrofit. This configuration leverages 400V/800V DC architecture for rapid energy discharge and V2L (Vehicle-to-Load) site power, solving the unsprung weight damping challenges through advanced suspension geometry while eliminating the parasitic drag of traditional passive towing.
A modular truck bed could have Through-the-Road (TTR) AWD (given a better VCU) and e.g. hub motors or an axle motor.
First monolithic 3D chip built in a U.S. foundry
> Until now, most attempts at 3D chips have relied on stacking separate chips. That approach works, but the connections between layers are coarse, sparse, and prone to bottlenecks.
> Instead of fabricating separate chips and then fusing them, the team builds each layer directly on top of the last in one continuous process. This “monolithic” method uses temperatures low enough to avoid damaging the circuitry below, allowing the researchers to stack components more tightly and connect them far more densely.
NewsArticle: "First truly 3D chip fabbed at US foundry, features carbon nanotube transistors and RAM on a single die — future devices could have up to 1000x improvement in energy-delay product" (2025) https://www.tomshardware.com/tech-industry/semiconductors/st...
Light-based catalyst-free conversion of CH4 and CO2
ScholarlyArticle: "Light-based catalyst-free conversion of CH4 and CO2" (2025) https://www.nature.com/articles/s41566-025-01800-3
NewsArticle: "High-energy photons drive conversion of greenhouse gases into high-value chemicals, no catalyst needed" (2025) https://phys.org/news/2025-12-high-energy-photons-conversion... :
> A team of researchers from China discovered that high-energy photons with a wavelength of 185 nm generated by a specialized 28-W ultraviolet light source could directly break the strong chemical bonds in methane and carbon dioxide. This allowed them to transform the gases into chemicals such as water-gas (CO/H2) and ethane (C2H6) under ambient conditions and even in oxygen-free outer-space-like conditions.
Solutions for producing 185 nm Vacuum UV light?
From Gemini 3:
> The "28-W source" mentioned in that research is almost certainly a Low-Pressure Mercury Amalgam Lamp (which naturally emits at 185 nm and 254 nm) or a Xenon Excimer Lamp (172 nm). These gas-discharge lamps are currently 100x more efficient than any experimental 185 nm LED. [...] To generate a 185 nm photon, an LED needs a semiconductor bandgap of ~6.7 eV. [...] VUV reactor [...] VACNTs inside a glass tube filled with a noble gas (like Argon [126 nm] or Xenon [172 nm])
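The ~6.7 eV bandgap figure checks out from E = hc/λ; a quick computation:

```python
# Photon energy at 185 nm: E = h*c / lambda, converted to electronvolts
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per eV
wavelength = 185e-9   # m
E = h * c / wavelength / eV
print(round(E, 2))  # -> 6.7
```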
Sodium borohydride a better Hydrogen storage, transport solution than Ammonia
NewsArticle: "Hydrogen transport is extremely expensive ― Australia finds a solution and produces 1,2 bn pounds in powder" (2025) https://energiesmedia.com/hydrogen-transport-is-extremely-ex... :
> [Australia] John Curtin University researchers engineered a catalyst that easily converts the byproduct into the carrier powder. This all forms part of the university’s Kotai Hydrogen Project. It has been found that adding water to 1 ton of sodium borohydride generates 213kg of hydrogen. Electrolyzers are then used to recharge the byproduct, making sodium borohydride 20 times more affordable than using ammonia, which delivers 178kg of hydrogen per 1 ton.
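The 213 kg figure is consistent with the hydrolysis stoichiometry NaBH4 + 2 H2O → NaBO2 + 4 H2 (half the hydrogen atoms come from the water); a quick check:

```python
# Hydrogen yield per metric ton of NaBH4 via NaBH4 + 2 H2O -> NaBO2 + 4 H2
M_NaBH4 = 22.99 + 10.81 + 4 * 1.008  # g/mol, ~37.83
M_H2 = 2 * 1.008                     # g/mol
kg_H2_per_ton = 1000 / M_NaBH4 * 4 * M_H2
print(round(kg_H2_per_ton))  # -> 213
```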
Rust Coreutils 0.5.0 Release: 87.75% compatibility with GNU Coreutils
If you want 100% compatibility with GNU Coreutils plus memory safety, just compile GNU Coreutils with Fil-C. 100% compatibility with zero rewrite.
Two-dimensional magnetic gradients created with direct-write laser annealing
ScholarlyArticle: "Two-dimensional gradients in magnetic properties created with direct-write laser annealing" (2025) https://www.nature.com/articles/s41467-025-65921-7
NewsArticle: "Laser draws made-to-order magnetic landscapes" (2025) https://www.eurekalert.org/news-releases/1109121
Light-bending ferroelectric controls blue and UV could transform chipmaking
ScholarlyArticle: "Anomalous Refractive Index Modulation and Giant Birefringence in 2D Ferrielectric CuInP_2S_6" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adom.20...
Energy-efficient CO2 capture from emissions with pyridinic-graphene membranes
ScholarlyArticle: "Energy- and cost-efficient CO2 capture from dilute emissions by pyridinic-graphene membranes" (2025) https://www.nature.com/articles/s41893-025-01696-5
NewsArticle: "Graphene membranes offer efficient, low-cost option for industrial CO₂ capture" (2025) https://phys.org/news/2025-12-graphene-membranes-efficient-o...
A “frozen” dictionary for Python
PMap and PVector are persistent data structures from functional Python libraries.
"PEP 351 – The freeze protocol" (2005, rejected) https://peps.python.org/pep-0351/ ; IIUC the freeze protocol proposed basically:
def freeze(obj):
    return cache.setdefault(hash(obj), obj.__freeze__())
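A self-contained sketch of the same idea for a few builtin types (the interning-by-hash shortcut is a simplification; a real implementation would key the cache by the frozen value itself, since distinct values can share a hash):

```python
# Minimal PEP 351-style freeze() for list/set/dict (illustrative only)
cache = {}

def freeze(obj):
    frozen = {
        list: tuple,
        set: frozenset,
        dict: lambda d: frozenset(d.items()),
    }[type(obj)](obj)
    # intern the frozen value so repeated freezes return one shared object
    return cache.setdefault(hash(frozen), frozen)

assert freeze([1, 2]) == (1, 2)
assert freeze({"a": 1}) == frozenset({("a", 1)})
```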
/? "Existing threads re: consts and applications thereof"
I wasn't able to find a URL to this post (2021) in the python-ideas mailing list archives using a double-quoted search term today; I had to use the Python mailing list's own search engine. Did something break crawling of mailing lists? Old Mailman HTML archives were very simple to crawl. ENH: pypa: add a sitemap.xml for/of mailing list archives and forums; @pypa: ask for search engine indexing advice: "How do we make sure that the python mailing list archives will be search indexed?" (as they traditionally were)
How to find the .txt of mailing list archives posts these days?
From "[Python-ideas] Re: Introduce constant variables in Python" (2021) https://mail.python.org/archives/list/python-ideas@python.or... :
- pyrsistent: PMap, PVector
I'm out of time for this; (reformatting this for HN so that URLs will be auto-linkified but newlines won't be eliminated) here's the full email as .txt, the mailing list archive has a hyperlinkified version with newlines preserved. GH Markdown and CommonMark Markdown also preserve newlines and auto-linkify:
From: [@westurner]
Date: Thu, Jun 17, 2021, 10:43 AM
Subject: Re: [Python-ideas] Re: Introduce constant variables in Python
Cc: python-ideas <python-ideas@python.org>
On Mon, May 24, 2021 at 5:43 PM Chris Angelico <rosuav@gmail.com> wrote:
Requiring that a name not be rebound is well-defined and testable.
Requiring that an object not change is either trivial (in the case of,
say, an integer) or virtually impossible (in the case of most
objects).
What would be the advantage of such a declaration?
ChrisA
## Existing threads re: consts and applications thereof
So, searching `/? from:me pyrsistent`, I found a few results:
- "[Python-Dev] Challenge: Please break this! (a.k.a restricted mode revisited)" 2016-04
https://mail.python.org/pipermail/python-dev/2016-April/143958.html
- ~Sandboxing python within python is nontrivial to impossible; consts might help a bit
- "Proposal to add const-ness type hints to PEP-484"
https://mail.python.org/archives/list/python-ideas@python.org/thread/OVPF5I6IOVF6GOJQRH5UGCCU3R7PQHUF/
- https://github.com/python/typing/issues/242
- "Final names and attributes" https://github.com/python/mypy/pull/5522
This is where `typing.Final` comes from.
- "[Python-ideas] "Immutable Builder" Pattern and Operator"
https://mail.python.org/pipermail/python-ideas/2017-January/044374.html
- [pyrsistent] and "fn.py [do] immutables:
https://github.com/kachayev/fn.py/blob/master/README.rst#persistent-data-structures "
- "[Python-ideas] Add recordclass to collections module"
https://groups.google.com/g/python-ideas/c/9crHfcCBgYs/m/6_EEaWJAAgAJ
- ORMs (e.g. Django, SQLAlchemy) require "dirty state" checking to know which object attributes have changed and need an SQL statement to be executed to synchronize the state; this is relevant because when we're asking for a mutable namedtuple we're often trying to do exactly this pattern.
- "[Python-ideas] Suggestions: dict.flow_update and dict.__add__"
https://www.google.com/search?q=%22%5BPython-ideas%5D+Suggestions%3A+dict.flow_update+and+dict.__add__%22
> dicttoolz has functions for working with these objects; including dicttoolz.merge (which returns a reference to the merged dicts but does not mutate the arguments passed).
>
> https://toolz.readthedocs.io/en/latest/api.html#dicttoolz
> https://toolz.readthedocs.io/en/latest/api.html#toolz.dicttoolz.merge
>
> pyrsistent has a PRecord class with invariants and type checking that precedes dataclasses. pyrsistent also has 'freeze' and 'thaw' functions for immutability. PRecord extends PMap, which implements __add__ as self.update(arg) (which does not mutate self)
https://github.com/tobgu/pyrsistent/blob/master/README.rst#precord
>
> https://github.com/tobgu/pyrsistent/blob/master/pyrsistent/_pmap.py
- "[Python-ideas] How to prevent shared memory from being corrupted ?"
https://www.google.com/search?q=%22How+to+prevent+shared+memory+from+being+corrupted+%3F%22
> PyArrow Plasma object ids, "sealing" makes an object immutable, pyristent
>
> https://arrow.apache.org/docs/python/plasma.html#object-ids
> https://arrow.apache.org/docs/python/plasma.html#creating-an-object-buffer
> > Objects are created in Plasma in two stages. First, they are created, which allocates a buffer for the object. At this point, the client can write to the buffer and construct the object within the allocated buffer. [...]
- [Python-ideas] Experimenting with dict performance, and an immutable dict
https://mail.python.org/archives/list/python-ideas@python.org/message/DNBGUJHDH4UTPSETMFFWMJHNXQXIWX4I/
> https://pyrsistent.readthedocs.io/en/latest/intro.html#pyrsistent :
>
>> Pyrsistent is a number of persistent collections (by some referred to as functional data structures). Persistent in the sense that they are immutable.
>>
>> All methods on a data structure that would normally mutate it instead return a new copy of the structure containing the requested updates. The original structure is left untouched.
>>
>> This will simplify the reasoning about what a program does since no hidden side effects ever can take place to these data structures. You can rest assured that the object you hold a reference to will remain the same throughout its lifetime and need not worry that somewhere five stack levels below you in the darkest corner of your application someone has decided to remove that element that you expected to be there.
>>
>> Pyrsistent is influenced by persistent data structures such as those found in the standard library of Clojure. The data structures are designed to share common elements through path copying. It aims at taking these concepts and make them as pythonic as possible so that they can be easily integrated into any python program without hassle.
> What would be the advantage of such a declaration?
Constants don't need to be locked or unlocked; which is advantageous for parallelism and reasoning about program correctness.
True consts (wherein everything referred to by that object is 'frozen' and immutable, or at least only modifiable with e.g. copy-on-write) wouldn't require locks, which would be advantageous post-GIL.
You could do consts by never releasing a threading.Lock (or similar):
- https://docs.python.org/3/library/asyncio-sync.html#locks
- https://docs.python.org/3/library/threading.html#lock-objects
- This from
https://docs.python.org/2/library/sets.html?highlight=immutable#immutable-transforms re ImmutableSet/FrozenSet
is not present in the python 3 docs:
https://docs.python.org/3/library/stdtypes.html#set-types-set-frozenset
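Short of true consts, the stdlib already offers lock-free read-only views and frozen types; a minimal example:

```python
from types import MappingProxyType

settings = {"debug": False, "retries": 3}
SETTINGS = MappingProxyType(settings)  # read-only view of the dict

try:
    SETTINGS["debug"] = True
except TypeError:
    print("read-only")  # MappingProxyType rejects item assignment

TAGS = frozenset({"a", "b"})  # immutable and hashable, unlike set
```

Note that MappingProxyType is only a view: whoever still holds the underlying dict can mutate it, so it restricts consumers rather than making the data truly constant.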
Though - even if Python enforced normal consts in the language - all of the other code objects would still be mutable, so you still have the impossibility of sandboxing python.
Functional and contracts coding styles rely upon invariance, which can be accomplished with various third-party packages that enforce const-ness throughout what may be an object tree behind that reference which would otherwise need to be copy.deepcopy()'d.
## pyrsistent
Src: https://github.com/tobgu/pyrsistent
> - PVector, similar to a python list
> - PMap, similar to dict
> - PSet, similar to set
> - PRecord, a PMap on steroids with fixed fields, optional type and invariant checking and much more
> - PClass, a Python class with fixed fields, optional type and invariant checking and much more
> - Checked collections, PVector, PMap and PSet with optional type and invariance checks and more
> - PBag, similar to collections.Counter
> - PList, a classic singly linked list
> - PDeque, similar to collections.deque
> - Immutable object type (immutable) built on the named tuple
> - freeze and thaw functions to convert between python standard collections and pyrsistent collections.
> - Flexible transformations of arbitrarily complex structures built from PMaps and PVectors.
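What "persistent update" means, sketched with plain dicts (pyrsistent's PMap does this efficiently via structural sharing; this naive copy is O(n)):

```python
def pmap_set(mapping, key, value):
    """Return a new mapping with key set; the original is left untouched."""
    updated = dict(mapping)  # naive full copy; a real PMap shares structure
    updated[key] = value
    return updated

m1 = {"a": 1}
m2 = pmap_set(m1, "b", 2)
assert m1 == {"a": 1}          # original unchanged
assert m2 == {"a": 1, "b": 2}  # updated copy
```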
## icontract
Src: https://github.com/Parquery/icontract
> icontract provides design-by-contract to Python3 with informative violation messages and inheritance.
>
> It also gives a base for a flourishing of a wider ecosystem:
>
> - A linter pyicontract-lint,
> - A sphinx plug-in sphinx-icontract,
> - A tool icontract-hypothesis for automated testing and ghostwriting test files which infers Hypothesis strategies based on the contracts,
together with IDE integrations such as icontract-hypothesis-vim, icontract-hypothesis-pycharm, and icontract-hypothesis-vscode,
> - Directly integrated into CrossHair, a tool for automatic verification of Python programs,
together with IDE integrations such as crosshair-pycharm and crosshair-vscode, and
> - An integration with FastAPI through fastapi-icontract to enforce contracts on your HTTP API and display them in OpenAPI 3 schema and Swagger UI.
https://en.wikipedia.org/wiki/Design_by_contract
https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invari... [ https://en.wikipedia.org/wiki/Class_invariant ]
> What is the difference between "invariant" and "constant" and "final"?
Booting Linux in QEMU and Writing PID 1 in Go to Illustrate Kernel as Program
Systemd service unit and systemd-nspawn support could be written in Go, too;
From https://news.ycombinator.com/item?id=41270425 re: "MiniBox, ultra small busybox without uncommon options":
> There's a pypi:SystemdUnitParser.
> docker-systemctl-replacement > systemctl3.py parses and schedules processes defined in systemd unit files: https://github.com/gdraheim/docker-systemctl-replacement/blo...
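Unit files are INI-like, so simple cases parse with the stdlib (a sketch only; real units allow repeated keys, which configparser can't represent):

```python
import configparser

UNIT = """\
[Unit]
Description=Example service

[Service]
ExecStart=/usr/bin/sleep 1
Restart=on-failure
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # systemd keys are case-sensitive
parser.read_string(UNIT)
print(parser["Service"]["ExecStart"])  # /usr/bin/sleep 1
```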
From a container2wasm issue about linux-wasm the other day: https://github.com/container2wasm/container2wasm/issues/550#... :
> [ uutils/uucore, uutils/coreutils, uutils/procps, uutils/util-linux, findutils, diffutils, toybox (C), rustybox, ]
The FrontierMath benchmark ranks models on math problem performance.
How can other leading math and coding models test your solution?
Write unit tests [with pytest] which assert that the predictive error is within a reasonable tolerance
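A sketch of that kind of test (the model and tolerance here are placeholders):

```python
# pytest-style check that a predictor stays within tolerance of ground truth
def predict(x):
    """Stand-in for a model-generated solution under test."""
    return 2.0 * x + 0.1

def test_predictive_error_within_tolerance():
    xs = range(10)
    expected = [2.0 * x for x in xs]
    max_err = max(abs(predict(x) - e) for x, e in zip(xs, expected))
    assert max_err < 0.5  # tolerance is an assumption for this sketch

test_predictive_error_within_tolerance()
```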
Notes re: Navier-Stokes from "Google DeepMind team up to solve the Navier-Stokes million-dollar problem": https://news.ycombinator.com/item?id=44383829 :
> Shouldn't solving NS also solve for n-body gravity?
From https://news.ycombinator.com/item?id=46017130 :
>>> "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
>> Planetary orbits are an n-body problem. GR, SQG [GR, Gross-Pitaevskii, Navier-Stokes,], and Gravity from QFT solve for planetary orbits
The C++ standard for the F-35 Fighter Jet [video]
PDF: https://www.stroustrup.com/JSF-AV-rules.pdf
Do avionics in general subscribe to MISRA C/C++ or do they go even further with an additional (or different) approach?
Depends on the region. MISRA is widely adopted, and then there are the US MIL standards, ECSS for European aerospace stuff, DO-178C for aviation...
/?hnlog awesome-safety-critical
From https://news.ycombinator.com/item?id=45562815 :
> awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
From "Safe C++ proposal is not being continued" (2025) https://news.ycombinator.com/item?id=45237019 :
> Safe C++ draft: https://safecpp.org/draft.html
Also there are efforts to standardize safe Rust; rust-lang/fls, rustfoundation/safety-critical-rust-consortium
> How does what FLS enables compare to these [unfortunately discontinued] Safe C++ proposals?
Privilege Escalation in Fedora Linux: Exploiting ABRT for Root
g_autofree char *docker_inspect_cmdline = NULL;
if (root_dir != NULL)
docker_inspect_cmdline = g_strdup_printf("chroot %s /bin/sh -c \"docker inspect %s\"", root_dir, container_id);
else
docker_inspect_cmdline = g_strdup_printf("docker inspect %s", container_id);
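The bug class here is untrusted input interpolated into a shell command line; in Python terms (a sketch of the pattern, not ABRT's actual code path):

```python
import shlex

container_id = "$(touch /tmp/pwned)"  # hypothetical attacker-controlled value
unsafe = f"docker inspect {container_id}"             # a shell would expand $(...)
safe = f"docker inspect {shlex.quote(container_id)}"  # quoted into a literal argument
print(safe)  # docker inspect '$(touch /tmp/pwned)'
```

Better still is to avoid the shell entirely and pass an argv list, e.g. `subprocess.run(["docker", "inspect", container_id])`.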
What static and dynamic analysis tools and rules could have found this vuln?
Cage: Hardware-Accelerated Safe WebAssembly
"Cage: Hardware-Accelerated Safe WebAssembly" (2024) https://arxiv.org/abs/2408.11456v2 :
> Abstract: WebAssembly (WASM) is an immensely versatile and increasingly popular compilation target. It executes applications written in several languages (e.g., C/C++) with near-native performance in various domains (e.g., mobile, edge, cloud). Despite WASM's sandboxing feature, which isolates applications from other instances and the host platform, WASM does not inherently provide any memory safety guarantees for applications written in low-level, unsafe languages.
> To this end, we propose Cage, a hardware-accelerated toolchain for WASM that supports unmodified applications compiled to WASM and utilizes diverse Arm hardware features aiming to enrich the memory safety properties of WASM. Precisely, Cage leverages Arm's Memory Tagging Extension (MTE) to (i) provide spatial and temporal memory safety for heap and stack allocations and (ii) improve the performance of WASM's sandboxing mechanism. Cage further employs Arm's Pointer Authentication (PAC) to prevent leaked pointers from being reused by other WASM instances, thus enhancing WASM's security properties.
> We implement our system based on 64-bit WASM. We provide a WASM compiler and runtime with support for Arm's MTE and PAC. On top of that, Cage's LLVM-based compiler toolchain transforms unmodified applications to provide spatial and temporal memory safety for stack and heap allocations and prevent function pointer reuse. Our evaluation on real hardware shows that Cage incurs minimal runtime (<5.8%) and memory (<3.7%) overheads and can improve the performance of WASM's sandboxing mechanism, achieving a speedup of over 5.1%, while offering efficient memory safety guarantees.
Src: https://github.com/TUM-DSE/cage-meta
llvm-memsafe-wasm: https://github.com/TUM-DSE/llvm-memsafe-wasm :
> A LLVM fork to implement MTE-based memory safety for WASM
wasmtime-mte: https://github.com/TUM-DSE/wasmtime-mte :
> A fork of wasmtime to implement MTE-based memory safety for WASM
wasm-tools-mte: https://github.com/TUM-DSE/wasm-tools-mte
wasi-libc: https://github.com/martin-fink/wasi-libc
Operando interlayer expansion of curved graphene for dense supercapacitors
Any exotic materials required? Just carbon?
Translation: energy densities comparable to batteries (~50–100 Wh/l) and power output characteristic of supercapacitors (~70 kW/l at ~10 Wh/l). Sounds useful; fingers crossed they can figure out how to scale this up.
stable performance for > 50000 cycles
ML-KEM Mythbusting
From https://news.ycombinator.com/item?id=45743372 re: the Cloudflare Merkle Tree draft:
> Problem is PQ signatures are large. If certificate chain is small that could be acceptable, but if the chain is large, then it can be expensive in terms of bandwidth and computation during TLS handshake. That is the exchange sends many certificates which embed a signature and a large (PQ) public key.
> Merkle Tree Certificates ensures that an up to date client only needs 1 signature, 1 public key, 1 merkle tree witness.
> Looking at an MTC generated certificate they've replaced the traditional signing algorithm and signature with a witness.
> That means all a client needs is a signed merkle root which comes from an expanding Merkle Tree signed by the MTCA (Merkle Tree CA), which is delivered somehow out of band.
From "Keeping the Internet fast and secure: introducing Merkle Tree Certificates" (2025-10) https://blog.cloudflare.com/bootstrap-mtc/ :
> The central problem is the sheer size of these new algorithms: signatures for ML-DSA-44, one of the most performant PQ algorithms standardized by NIST, are 2,420 bytes long, compared to just 64 bytes for ECDSA-P256, the most popular non-PQ signature in use today; and its public keys are 1,312 bytes long, compared to just 64 bytes for ECDSA. That's a roughly 20-fold increase in size. Worse yet, the average TLS handshake includes a number of public keys and signatures, adding up to 10s of kilobytes of overhead per handshake. This is enough to have a noticeable impact on the performance of TLS.
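A back-of-the-envelope version of that overhead claim (the chain length of 2 is an assumption; counting each certificate's public key plus CA signature, plus one handshake signature):

```python
# Per-handshake public-key + signature overhead, sizes in bytes from the article
sizes = {
    "ECDSA-P256": {"pub": 64, "sig": 64},
    "ML-DSA-44": {"pub": 1312, "sig": 2420},
}

def handshake_overhead(s, chain_len=2):
    # chain_len certificates (public key + signature each) + 1 handshake signature
    return chain_len * (s["pub"] + s["sig"]) + s["sig"]

for name, s in sizes.items():
    print(name, handshake_overhead(s))  # ECDSA-P256 320, ML-DSA-44 9884
```

The ML-DSA total lands near 10 KB, consistent with the "10s of kilobytes" claim once additional handshake fields are counted.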
Are ML-KEM certs impractically large too?
ML-KEM is a key establishment scheme, not a signature scheme.
From Gemini then:
Algorithm    Role                    Public Key Size  Signature / Ciphertext Size
ECDSA P-256  Identity / Signing      ~64 bytes        ~64 bytes
X25519       Key Exchange            32 bytes         32 bytes
ML-DSA-44    PQ; Identity / Signing  1,312 bytes      2,420 bytes
ML-KEM-768   PQ; Key Exchange        1,184 bytes      1,088 bytes
> If you tried to make "ML-KEM Certificates" (using a newer mechanism called AuthKEM where you authenticate by proving you can decrypt a challenge rather than signing), you would replace the ~2.4 KB ML-DSA signature with a ~1 KB ML-KEM ciphertext. This saves about 50% of the bandwidth compared to ML-DSA, but it is still roughly 35x larger than a traditional ECC certificate chain.
/? AuthKEM:
kemtls/draft-celi-wiggers-tls-authkem: https://github.com/kemtls/draft-celi-wiggers-tls-authkem
"KEM-based Authentication for TLS 1.3" https://kemtls.org/draft-celi-wiggers-tls-authkem/draft-celi... :
> Table 1. Size comparison of public-key cryptography in TLS 1.3 and AuthKEM handshakes.
Handshake  HS auth algorithm  HS auth bytes  Certificate chain bytes  Sum
...
AuthKEM    Kyber-768          2272           6152 (Dilithium-2)       8424
AuthKEM    Kyber-768          2272           2229 (Falcon-512)        4564
"KEM-based pre-shared-key handshakes for TLS 1.3" > "2.2. Key Encapsulation Mechanisms", "3. Abbreviated AuthKEM with pre-shared public KEM keys": https://kemtls.org/draft-celi-wiggers-tls-authkem/draft-wigg...
Is this the thing with ML-KEM, then:
> [With AuthKEM,] you would replace the ~2.4 KB ML-DSA signature with a ~1 KB ML-KEM ciphertext.
What "the thing"? AuthKEM isn't being deployed anywhere.
How much more complex is the difference than 2.4 KB w/ ML-DSA or ~1 KB w/ ML-KEM?
I'm sorry I don't understand what you're asking
Though there is a difference between a cert signature (ML-DSA) and a challenge (ML-KEM), ultimately and fundamentally, isn't real key size still a relevant metric for comparison?
(Everyone downvoted this like -6/-7. I guess they didn't understand the relevance.)
IDK a terse analogy then:
MerkleCerts + ML-DSA : ML-DSA :: Challenge (ML-KEM) : ____ (ML-DSA)
Merkle-signing cert trust roots is a security/bytes-transferred efficiency tradeoff.
What is the difference in number of bytes seemed usefully relevant to me at least.
IBM CEO says there is 'no way' spending on AI data centers will pay off
100k TPS over a billion rows: the unreasonable effectiveness of SQLite
That's a helpful TPS Report.
TIL `SAVEPOINT` can occur in a BEGIN ... END SQLite transaction, and that works with optimizing batch size on a particular node with a given load.
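For reference, SAVEPOINT nests inside an open transaction, so a failed sub-batch can be rolled back without abandoning the outer batch; a minimal sqlite3 demo:

```python
import sqlite3

# isolation_level=None puts the stdlib driver in autocommit mode,
# so BEGIN/COMMIT can be issued explicitly
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t(x)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT batch")
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO batch")  # undo only the sub-batch
conn.execute("COMMIT")             # the outer transaction still commits
print(conn.execute("SELECT count(*) FROM t").fetchone()[0])  # 1
```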
Is there a solution for SQLite WAL corruption?
From https://news.ycombinator.com/item?id=45133444 :
> "PSA: SQLite WAL checksums fail silently and may lose data" https://news.ycombinator.com/item?id=44672902
> sqlite-parquet-vtable, [...]
As mentioned in those threads, there is no SQLite WAL corruption if you have a working disk and file system. If you don't, then all bets are off - SQLite doesn't protect you against that, and most other databases won't either. And nested transactions (SAVEPOINT) won't have any impact on this - all they do in this form is reduce the number of transactions you have.
> working disk & file system
And a working ECC or non-ECC RAM bus, and [...].
How bad is recovery from WAL checksum / journal corruption [in SQLite] [with batching at 100k TPS]?
And should WAL checksums be used for distributed replication "bolted onto" SQLite?
>> (How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?
SQLite would probably still be faster over the network with proper Merkleization
Ghostty compiled to WASM with xterm.js API compatibility
So, could someone now make a Visual Studio Code (and specifically code-server) that has ghostty-web as the Terminal?
Yup, that's the idea!
How to compile the userspace, though?
Have you seen container2wasm or ktock/vscode-container-wasm?
container2wasm: https://github.com/container2wasm/container2wasm
ktock/vscode-container-wasm: https://github.com/ktock/vscode-container-wasm
ktock/vscode-container-wasm-gcc-example: https://github.com/ktock/vscode-container-wasm-gcc-example
From joelseverin/linux-wasm: https://github.com/joelseverin/linux-wasm :
> Hint: Wasm lacks an MMU, meaning that Linux needs to be built in a NOMMU configuration. Wasm programs thus need to be built using -fPIC/-shared. Alternatively, existing Wasm programs can run together with a proxy that does syscalls towards the kernel. In such a case, each thread that wishes to independently execute syscalls should map to a thread in the proxy. The drawback of such an approach is that memory cannot be mapped and shared between processes. However, from a memory protection standpoint, this property could also be beneficial.
A bit OT here, but oh well
Would hardened_malloc or llvm scudo be useful in a WASM runtime (given that WASM doesn't have an MMU)? https://www.google.com/search?q=would+hardened_malloc+be+use...
Emscripten handles malloc and free with dlmalloc, emmalloc, mimalloc,
emcc your_code.c -s 'MALLOC="emmalloc"' -o your_code.html
how to add "hardened_malloc" memory allocator support to emscripten for WASM? https://www.google.com/search?q=how+to+add+%22hardened_mallo...
"Import custom memory manager?" · Issue #24851 · emscripten-core/emscripten https://github.com/emscripten-core/emscripten/issues/24851
Just learned about Cage:
Cage does Hardware-Accelerated Safe WebAssembly (WASM) with LLVM with support for ARM64 Memory Tagging Extension (MTE) and Pointer Authentication (PAC) memory safety features.
"Cage: Hardware-Accelerated Safe WebAssembly" (2024) https://arxiv.org/abs/2408.11456v2
Not able to enter text on mobile
From "WebAssembly (WASM) arch support for the Linux kernel" (2025) https://news.ycombinator.com/item?id=45784329 :
> JupyterLite still lacks a Terminal e.g. with BusyBox Ash in WASM, with a file system integrated with the Jupyter-xeus kernel file system.
> This [joelseverin/linux-wasm] appears to load much more quickly than other Linux and I think even just bash in WASM demos I've seen.
Random lasers from peanut kernel doped with birch leaf–derived carbon dots
Would this work on peanuts?
"Near-Field Optical Nanopatterning of Graphene" (2025) https://onlinelibrary.wiley.com/doi/10.1002/smsc.202500184 .. https://news.ycombinator.com/item?id=45623301
Why are they random lasers?
From https://news.ycombinator.com/item?id=45949800 :
> "Cavity electrodynamics of van der Waals heterostructures" (2024) https://arxiv.org/abs/2403.19745 ; graphite / graphene optical cavity
From https://news.ycombinator.com/item?id=44922581 :
> "Grover's algorithm to efficiently prepare quantum states in optical cavity QED" (2025) https://phys.org/news/2025-08-grover-algorithm-efficiently-q...:
>> "Deterministic carving of quantum states with Grover's algorithm" (2025) https://journals.aps.org/pra/abstract/10.1103/s3vs-xz7w
Most lasers have a relatively small rate of gain per unit length so they depend on mirrors. Some lasers like
https://en.wikipedia.org/wiki/Nitrogen_laser
get enough gain that you don't need the mirrors; it's pretty easy to build one about a foot long that makes nanosecond pulses that are about as long as the laser.
Random lasers use scattering off random particles to extend the optical path instead of mirrors:
https://en.wikipedia.org/wiki/Random_laser
I studied condensed matter physics and knew a professor well who was one of Anderson’s grad students so the phenomenon of
https://en.wikipedia.org/wiki/Anderson_localization
which is relevant to random lasers is familiar to me.
Cool field!
Anderson localization ... wavefronts
/?hnlog wavefro
- Huygens-Steiner ; https://news.ycombinator.com/item?id=43673759 , https://news.ycombinator.com/item?id=44401685
(The other Huygens principle is that each point on a wavefront causes another wavefront. How does that also apply to Anderson localization and optical singularities?)
- optical singularities:
"Engineering phase and polarization singularity sheets" (2021) https://www.nature.com/articles/s41467-021-24493-y ... citations: https://scholar.google.com/scholar?cites=6348012568124728820...
- /? optical singularities and Anderson localization
TIL that optical singularities are robust and about optical vortex capture.
- metamaterials
- /? Anderson localization
"Quantum light transport in phase-separated Anderson localization fiber" citations: https://scholar.google.com/scholar?cites=2109673059927233012...
- /?hnlog Fiber
"Selective excitation of a single rare-earth ion in an optical fiber" (2025) https://opg.optica.org/oe/fulltext.cfm?uri=oe-33-19-41011 .. https://news.ycombinator.com/item?id=45620981
- /?hnlog photon
"Telecom-wavelength quantum teleportation using frequency-converted photons" (2025) https://www.nature.com/articles/s41467-025-65912-8
- /?hnlog like black holes
"Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) .. https://news.ycombinator.com/item?id=41643024
- /?hnlog metamaterial
From https://news.ycombinator.com/item?id=45715228 :
> Metamaterials and metasurfaces are probably useful for extreme nonlinear spiking neuromorphic computing with integrated nanophotonics.
> Some optical metamaterials have picosecond phase change latency
I learned that researching how to create a speckle QRNG TRNG.
Phase-change metamaterials are probably faster at whitening photonic speckle randomness than an FPGA.
"Traceable random numbers from a non-local quantum advantage" (2025) https://www.nature.com/articles/s41586-025-09054-3 .. https://news.ycombinator.com/item?id=45236896 :
> This protocol forms the basis for a public traceable and certifiable quantum randomness beacon that we have launched.
Here's that speckle TRNG design chat: https://gemini.google.com/share/1bb101b39c96 :
> This is the key takeaway: the coherence time of the phonon (mechanical storage) is millions to billions of times longer than the coherence time of the exciton (the optical state), which is typically in the picoseconds. This is precisely why it's so attractive for quantum memory.
Todo
> Resonance: If the plasmon's wavelength fits perfectly into the length of the graphene ribbon (like a guitar string vibrating), you create a strong standing wave. The energy of the THz light is now trapped and massively amplified within this tiny graphene structure.
And also, twisting carbon nanotubes opens a bandgap, which may be useful for creating transistors out of carbon.
Nanomechanical energy storage in twisted SWCNTs:
From https://news.ycombinator.com/item?id=45951197 re: "Exploring recent advances in the versatility and efficiency of carbon materials for next generation supercapacitor applications: A comprehensive review" (2025) :
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x
But that's at like 1-10 GHz, not THz.
(Switching times by material: graphene: ~100s of GHz; rGO: 1-10 GHz; SWCNT: 1-10 GHz.)
From this design chat about an integrated biorefinery chip fabrication concept, I learned that [Gemini 3 Pro] thinks there's an 80% chance that coating and drying carbon nanotubes in Lignin (or Lignin Vitrimer) would cause a sufficient bandgap in graphene: https://gemini.google.com/share/6796575598b2
> Can graphene be switched at THz frequencies, to drive optical resonators?
[ You need all-optical switching for THz frequencies; also there are graphene plasmons, which can be rescaled ]
Which metamaterials are best for THz all-optical switching? Are there all-carbon options? https://gemini.google.com/share/fe15869a8c9a :
> Graphene metasurfaces; highly oriented pyrolytic graphite (HOPG), Randomly oriented films of SWCNTs
Laser Shockwave Compaction might help here; but would it also relax the strain, and thus the bandgap, out of lignin-strained CNT transistors?
"One‐Step Transformation of Single‐Walled Carbon Nanotube Networks into High‐Performance Multilayer Graphene‐Rich Films via Laser Shockwave Compaction" (2025) https://advanced.onlinelibrary.wiley.com/doi/abs/10.1002/adf... https://westurner.github.io/hnlog/#comment-45951285
...
Are Anderson localization or optical singularities useful for maximizing state coherence time in carbon?
3.5pro: https://gemini.google.com/share/891867c0466b .. https://gemini.google.com/share/dece8f932e69 :
[ Yes, Anderson localization utilizes disorder, and Optical singularities utilize order (topology), and Isotopic Purification creates a "spin vacuum" to minimize magnetic noise ]
> Result: This is how record-breaking coherence times (seconds to minutes) are achieved in NV centers in diamond, far surpassing what Anderson localization alone typically provides
There are newer lower energy processes for fabricating lab grown diamond carbon with NV centers and color centers;
From "Scalable nano positioning of highly coherent color centers in prefab diamond" (2025) https://news.ycombinator.com/item?id=45843416 :
"Rapid, low-temp nanodiamond formation by electron-beaming adamantane C–H bonds" (2025) https://www.science.org/doi/10.1126/science.adw2025 .. https://news.ycombinator.com/item?id=45772158
"Quantum Nanodiamonds from 1 Step, Industrial-Scale Pressure and Temp Process" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.20... .. https://news.ycombinator.com/item?id=45772190
What happens when you Laser Shockwave Compact (LSC) nanomechanically twisted single-walled carbon nanotubes? Is there a usable (high-Q) bandgap in twisted SWCNTs, does LSC "lock in" that bandgap, and is that even necessary if nanomechanical energy storage in twisted single-walled carbon nanotubes is lossless?
Constant-time support coming to LLVM: Protecting cryptographic code
/? llvm.ct.select and __builtin_ct_select :
"Constant-Time Coding Support in LLVM: Protecting Cryptographic Code at the Compiler Level" (2025-10) PDF: https://llvm.org/devmtg/2025-10/slides/quick_talks/alexandre... :
> Circumvent Branch-base Timing Attacks
DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning [pdf]
Is everyone just glossing over the first place score of 118/120 on the Putnam?! I mean we'll see how it does on the upcoming 2025 test, but that's insane!
We've seen absolutely ridiculous progress in model capability over the past year (which is also quite terrifying).
For one thing, it's not a real score; they judged the results themselves and Putnam judges are notoriously tough. There was not a single 8 on the problem they claim partial credit for (or any partial credit above a 2) amongst the top 500 humans. https://kskedlaya.org/putnam-archive/putnam2024stats.html.
For another thing, the 2024 Putnam problems are in their RL data.
Also, it's very unclear how these competitions consisting of problems designed to have clear-cut answers and be solved by (well-prepared) humans in an hour will translate to anything else.
What do other models trained on the same problems score? What about if they are RL'd to not reproduce things word for word?
Why do you think that the 2024 Putnam problems that they used to test were in the training data?
/? "Art of Problem Solving" Putnam https://www.google.com/search?q=%22Art+of+Problem+Solving%22...
From p.3 of the PDF:
> Curating Cold Start RL Data: We constructed our initial training data through the following process:
> 1. We crawled problems from Art of Problem Solving (AoPS) contests, prioritizing math olympiads, team selection tests, and post-2010 problems explicitly requiring proofs, totaling 17,503 problems.
> Why do you think that the 2024 Putnam problems that they used to test were in the training data?
They reference https://artofproblemsolving.com/community/c13_contest_collec... for the source of their scrape and the Putnam problems are on that page under 'Undergraduate Contests'.
> Why do you think that the 2024 Putnam problems that they used to test were in the training data?
Putnam solutions can be found in multiple places online: https://kskedlaya.org/putnam-archive/, https://artofproblemsolving.com/community/c3249_putnam. These could have appeared in the training of the base LLM DeepSeek-V3.2-Exp or as problems in the training set - they do not give further detail on which problems they selected from AoPS, and as the second link shows, they are there.
Show HN: I turned algae into a bio-altimeter and put it on a weather balloon
Hi HN - My name is Andrew, and I'm a high school student.
This is a write-up on StratoSpore, a payload I designed and launched to the stratosphere. The goal was to test if we could estimate physical altitude based on algae fluorescence (using a lightweight ML model trained on the sensor data).
The blog post covers the full engineering mess/process, including:
- The Hardware: Designing PCBs for the AS7263 spectral sensor and Pi Zero 2 W.
- The biological altimeter: How I tried to correlate biological stress (fluorescence) with altitude.
- The Communications: A custom lossy compression algorithm I wrote to smash 1080p images down to 18x10 pixels so I could transmit them over LoRa (915 MHz) in semi-real-time.
The payload is currently lost in a forest, but the telemetry data survived. The code and hardware designs are open source on GitHub: https://github.com/radeeyate/stratospore
I'm happy to answer technical questions about the payload, software, or anything else you are curious about! Critique also appreciated!
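The linked repo has the actual compression code; purely as a hypothetical illustration of the magnitude of reduction described (average-pooling a grayscale 1080p frame down to 18x10; function and names invented):

```python
def downsample(pixels, w, h, tw=18, th=10):
    """Average-pool a row-major grayscale image (w*h values) to tw*th values."""
    out = []
    for ty in range(th):
        y0, y1 = ty * h // th, (ty + 1) * h // th
        for tx in range(tw):
            x0, x1 = tx * w // tw, (tx + 1) * w // tw
            total = n = 0
            for y in range(y0, y1):
                row = y * w
                for x in range(x0, x1):
                    total += pixels[row + x]
                    n += 1
            out.append(total // n)
    return out

# 1920x1080 frame -> 180 values; at one byte per value that is
# 180 B on the air instead of ~2 MB raw.
tiny = downsample([7] * (1920 * 1080), 1920, 1080)
```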
Congrats on the interesting project! I was curious to know more about the scientific payload: how did you measure the fluorescence? Did you apply excitation light continuously? Or did you rely on ambient light and correct for it when measuring fluorescence? Did you have a control on Earth to compensate for any biology-related effects? UV and even blue light can stress or even kill cells, or bleach the fluorescent proteins. How do you expect altitude to influence fluorescence? It would be great to look at some data (could not find it on the blog, or GitHub). Acrylic blocks a substantial portion of the UV light!
Edit: Definitely agree with the other comment that the whole experience is more important than these details.
Thank you for the kind words! The fluorescence was originally meant to be measured with an AS7273 spectrometer (unfortunately I bought a different one; it still worked fine though), measuring ~680 nm. Certainly not a great setup, but it worked. Light was ambient through acrylic, and I found out far too late about acrylic's UV-blocking effects. Despite that, I feel like the data is still somewhat valid, maybe. I did do some testing with it back on Earth, though I can't remember how it correlated.
The data I have is here: https://github.com/radeeyate/StratoSpore/blob/main/software/... - just be warned that the altitude data still isn't the exact same as it was while in the air (GPS not working so I had to take it from someone else).
From https://hps.org/publicinformation/ate/q12178/ :
> UV light, a form of energy, is defined as light having wavelengths between 100 nanometers (nm, 1 billionth of a meter in length) and 400 nm. [...]
> Most acrylic plastics will allow light of wavelength greater than 375 nm to pass through the material, but they will not allow UV-C wavelengths (100–290 nm) to pass through.
In terms of UV transmittance, glass is better for cold frames and the like, because acrylic filters out UV light.
Also, Hydrogen peroxide (H2O2) is an algaecide.
/? hydrogen peroxide algaecide https://www.google.com/search?q=hydrogen+peroxide+algaecide
OLEDs can now switch light's handedness with an electrical signal
ScholarlyArticle: "Electrical control of photon spin angular momentum in organic electroluminescent materials" (2025) https://www.nature.com/articles/s41566-025-01780-4
Unifying our mobile and desktop domains
Windows ARM64 Internals: Deconstructing Pointer Authentication
"The need for memory safety standards" (2025-02) https://news.ycombinator.com/item?id=43189934 :
> Technologies like ARM's Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code.
From OP: https://www.preludesecurity.com/blog/windows-arm64-internals... :
> In addition, current-generation ARM64 Microsoft devices, like the Surface Pro, are not shipped with chips that can support the Memory Tagging Extension (MTE) feature. Although not implemented today on Windows systems, the implementation of both PAC and MTE in the future would serve to greatly increase the cost of memory corruption exploits.
"The Arm64 memory tagging extension in Linux" (2020) on LWN: https://news.ycombinator.com/item?id=24824378#24829160
ASan: AddressSanitizer
MSan: MemorySanitizer
Google/sanitizers is archived because it was merged into LLVM sanitizers. https://github.com/google/sanitizers/ :
> The Sanitizers project, which includes AddressSanitizer, MemorySanitizer, ThreadSanitizer, LeakSanitizer, and more, is now archived.
LLVM Clang docs > AddressSanitizer: https://clang.llvm.org/docs/AddressSanitizer.html
There's a google/sanitizers wiki page from 2019 about Stack Instrumentation with ARM MTE Memory Tagging Extensions: https://github.com/google/sanitizers/wiki/Stack-instrumentat...
/? MemTagSanitizer https://www.google.com/search?q=MemTagSanitizer
"Color My World: Deterministic Tagging for Memory Safety" (2022) https://arxiv.org/abs/2204.03781 :
> 7.3 Pointer-safe tagging: Recall that safe allocations could still allow inter-object corruption unless it is also pointer-safe (Sections 5.3 and 6.3). To distinguish such safe, but pointer-unsafe allocations, we tag them using the 0b10xx. Consequently, we can at run-time distinguish pointers loaded from pointer-safe allocations, and apply tag forgery prevention to all other loaded pointers.
LLVM docs > MemTagSanitizer: https://llvm.org/docs/MemTagSanitizer.html :
> Introduction: Note: this page describes a tool under development. Part of this functionality is planned but not implemented. Hardware capable of running MemTagSanitizer does not exist as of Oct 2019.
> MemTagSanitizer is a fast memory error detector and a code hardening tool based on the Armv8.5-A Memory Tagging Extension. It detects a similar class of errors as AddressSanitizer or HardwareAssistedAddressSanitizer, but with much lower overhead.
> MemTagSanitizer overhead is expected to be in low single digits, both CPU and memory. There are plans for a debug mode with slightly higher memory overhead and better diagnostics. The primary use case of MemTagSanitizer is code hardening in production binaries, where it is expected to be a strong mitigation for both stack and heap-based memory bugs.
-fsanitize=memtag
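As a toy Python model (invented names; not the real hardware interface) of the mechanism underneath: MTE pairs a 4-bit tag in the pointer's unused top byte with a tag stored per 16-byte memory granule, and faults on mismatch, which is how use-after-free and out-of-bounds accesses get caught.

```python
import random

TAG_BITS, GRANULE = 4, 16  # MTE: 4-bit tags, 16-byte tag granules

class TaggedHeap:
    """Toy MTE-style heap: loads fault when pointer and memory tags differ."""
    def __init__(self):
        self.tags = {}  # granule index -> current tag

    def _granules(self, addr, size):
        return range(addr // GRANULE, (addr + size + GRANULE - 1) // GRANULE)

    def malloc(self, addr, size):
        tag = random.randrange(1, 1 << TAG_BITS)  # nonzero random tag
        for g in self._granules(addr, size):
            self.tags[g] = tag
        return (tag, addr)  # the "pointer" carries its tag

    def free(self, ptr, size):
        _, addr = ptr
        for g in self._granules(addr, size):
            self.tags[g] = 0  # retag on free; stale pointers now mismatch

    def load(self, ptr):
        tag, addr = ptr
        if self.tags.get(addr // GRANULE) != tag:
            raise MemoryError("tag check fault (e.g. use-after-free)")

heap = TaggedHeap()
p = heap.malloc(0x1000, 32)
heap.load(p)        # ok: pointer tag matches memory tag
heap.free(p, 32)    # any later heap.load(p) now faults
```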
Code sanitizer:
https://en.wikipedia.org/wiki/Code_sanitizer -fsanitize
Does -fsanitize=memtag already work with RISC-V CHERI?
https://github.com/CHERI-Alliance/llvm-project :
> Codasip LLVM compiler can be checked out from the codasip-cheri-riscv branch
/? "codasip-cheri-riscv" llvm https://www.google.com/search?q=%22codasip-cheri-riscv%22+ll...
codasip-cheri-riscv fork of LLVM: https://github.com/CHERI-Alliance/llvm-project/tree/codasip-...
What is the command to diff this against the commit of LLVM that it was forked from and against the LLVM main branch?
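One way, assuming the fork checkout and llvm/llvm-project are configured as remotes; sketched here against a throwaway local repo so the command pattern is runnable as-is:

```shell
# Self-contained demo of the diff commands in a scratch repo. For the real
# case, add the fork and llvm/llvm-project as remotes, fetch both, and
# substitute codasip-cheri-riscv / upstream/main for fork / main below:
#   git remote add upstream https://github.com/llvm/llvm-project.git
#   git fetch upstream
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb fork
echo fork-change >> file.txt && git commit -qam "fork work"
git checkout -q main
echo upstream-change > other.txt && git add other.txt && git commit -qm "upstream work"

git merge-base fork main      # the commit the fork diverged from
git diff --stat main...fork   # what the fork adds since that fork point
git diff --stat fork main     # full difference against current main
```

The three-dot form (`main...fork`) diffs against the merge-base automatically, which is usually what "diff against the commit it was forked from" means.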
Links to the source for ARM MTE support in the LLVM / LLDB -fsanitize=memtag sanitizer:
lldb/source/Plugins/Process/Utility/MemoryTagManagerAArch64MTE.cpp : https://github.com/llvm/llvm-project/blob/main/lldb/source/P...
lldb/source/Target/MemoryTagMap.cpp: https://github.com/llvm/llvm-project/blob/main/lldb/source/T... , lldb/unittests/Target/MemoryTagMapTest.cpp: https://github.com/llvm/llvm-project/blob/main/lldb/unittest...
lldb/test/API/linux/aarch64/mte_*: https://github.com/llvm/llvm-project/tree/main/lldb/test/API...
clang/test/Driver/aarch64-mte.c: https://github.com/llvm/llvm-project/blob/main/clang/test/Dr...
clang/unittests/Driver/SanitizerArgsTest.cpp looks thin: https://github.com/llvm/llvm-project/blob/main/clang/unittes...
SanitizerArgs.cpp: https://github.com/llvm/llvm-project/blob/main/clang/lib/Dri...
llvm/docs/MemTagSanitizer.rst: https://github.com/llvm/llvm-project/blob/main/llvm/docs/Mem... :
-fsanitize=memtag
-fsanitize-memtag-mode=
-f[no-]sanitize-memory-track-origins[=level]
-march=armv8+memtag
LLVM docs > MemTagSanitizer > Heap Tagging: https://llvm.org/docs/MemTagSanitizer.html#heap-tagging :
> Heap Tagging: Note: this part is not implemented as of Oct 2019.
> MemTagSanitizer will use Scudo Hardened Allocator with additional code to update memory tags when
LLVM docs > Scudo Hardened Allocator: https://llvm.org/docs/ScudoHardenedAllocator.html :
> The Scudo Hardened Allocator is a user-mode allocator, originally based on LLVM Sanitizers’ CombinedAllocator. It aims at providing additional mitigation against heap based vulnerabilities, while maintaining good performance. Scudo is currently the default allocator in Fuchsia, and in Android since Android 11
compiler-rt/lib/scudo/standalone: https://github.com/llvm/llvm-project/tree/main/compiler-rt/l...
hardened_malloc is an alternative to scudo.
Telecom-wavelength quantum teleportation using frequency-converted photons
Full title: Telecom-wavelength quantum teleportation using frequency-converted photons from remote quantum dots
From the abstract:
> A global quantum internet is based on scalable networks, which require reliable quantum hardware. Among them are quantum light sources providing deterministic, high-brightness, high-fidelity entangled photons and quantum memories with coherence times exceeding the millisecond range. Long-distance operation demands quantum light sources emitting at telecommunication wavelengths. A cornerstone for such networks is the demonstration of quantum teleportation. Here, we realize full-photonic quantum teleportation employing semiconductor quantum dots, which can fulfill all the aforementioned requirements.
Conjectural connection between Quantum Mechanics and Gravitation
From https://news.ycombinator.com/item?id=45996138 :
>>> "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
> Planetary orbits are an n-body problem. GR, SQG, and Gravity from QFT solve for planetary orbits
Are there gravitons if gravity is fully derived from particle forces per QFT?
GPU depreciation could be the next big crisis for hyperscalers
Is there any good reason to not start building graphene and carbon nanotube chips today?
I recall seeing an HN post recently that most graphene products are highly toxic
Silicosis is a hazard of doped SiO2 semiconductor manufacturing.
The search is on for semiconductor photoresist that doesn't contain PFAS.
Supply constraints on suitably pure sand for silicon wafers, on copper, and on neon limit margins.
Graphene and Carbon nanotube production from CO2 could be done on-site with one advantage being that then unprocessed graphene transport would be minimized.
The US currently imports a lot of graphite for graphene production.
There are all-graphene and graphene-coated aluminum heat sinks.
Filtered water with graphene in it can be used to make higher strength concrete.
FWIU there are maskless processes for semiconductor fabrication.
FWIU, existing EUV nanolithography processes already work on Silicon Carbide. There are now DUV and XUV lasers for photolithography.
Nanoimprinting would probably work with graphene. (CVD graphene production processes already have reels, similar to R2R reel-to-reel production.)
It looks like laser shocking carbon nanotubes congeals them into layers.
There are all-carbon motor windings and high voltage power cables.
Do incident rates in graphene production justify additional controls?
CO2 to nano- air and water filters and CO2 to chips would avoid a lot of chemicals in semiconductor manufacturing.
I just saw $26/ton for (non-CO2) carbon capture in 2025. Gravel is like $10-$50/ton.
Is graphene more hazardous than silicon for semiconductor manufacturing?
How can health and environmental hazards of graphene and carbon nanotube production be minimized or eliminated entirely?
Is there a sustainable binder, or a glass that slowly biodegrades, that would work with carbon-based chips?
Here's a diagram of a solution for these applications to summarize a chat with Gemini 3 pro thinking; "Carbon Chips: Hurdles and a Future for Hypercomputing: A Blueprint for the Integrated Biorefinery, Lignin-Strained Carbon Logic, and the Post-Silicon 'Green Chip' Economy": https://gemini.google.com/share/5a192ff20a31
Telecom-wavelength quantum teleportation using photons from remote quantum dots
ScholarlyArticle: "Telecom-wavelength quantum teleportation using frequency-converted photons from remote quantum dots" (2025) https://www.nature.com/articles/s41467-025-65912-8 :
> Abstract: [...] Here, we realize full-photonic quantum teleportation employing semiconductor quantum dots, which can fulfill all the aforementioned requirements. Two remote GaAs quantum dots, emitting in the near-infrared, are used: one as an entangled-photon pair source and the other as a single-photon source. During the experiment, the single photon is prepared in conjugate polarization states and interfaced with the biexciton emission of the entangled pair employing a polarization-selective Bell state measurement. This process teleports the respective polarization state onto the exciton emission of the entangled pair. The frequency mismatch between the triggered sources is erased using two polarization-preserving quantum frequency converters, enabling remote two-photon interference at telecommunication wavelengths, yielding a visibility of 30(1)%. A post-selected teleportation fidelity up to 0.721(33), significantly above the classical limit, demonstrates successful quantum teleportation between light from distinct sources. These results mark an important development for semiconductor-based quantum light sources.
NewsArticle; "Physicists Teleport Light Between Tiny Crystals, Pushing Quantum Internet Closer" https://scienceblog.com/physicists-teleport-light-between-ti...
NewsArticle: "Quantum teleportation between photons from two distant light sources achieved" https://phys.org/news/2025-11-quantum-teleportation-photons-... :
> In the Stuttgart experiment, the quantum dots were separated only by an optical fiber of about 10 m length. "But we are working on achieving considerably greater distances," says Strobel.
> In earlier work, the team had shown that the entanglement of the quantum dot photons remains intact even after a 36-kilometer transmission through the city center of Stuttgart. Another aim is to increase the current success rate of teleportation, which currently stands at just over 70%. Fluctuations in the quantum dot still lead to slight differences in the photons.
> "We want to reduce this by advancing semiconductor fabrication techniques," says Strobel.
Show HN: A game where you invest into startups from history
Re: backtesting and paper trading: https://news.ycombinator.com/item?id=38908537 :
> pyfolio.tears.create_interesting_times_tear_sheet
Show HN: Browser-based interactive 3D Three-Body problem simulator
Features include:
- Several preset periodic orbits: the classic Figure-8, plus newly discovered 3D solutions from Li and Liao's recent database of 10,000+ orbits (https://arxiv.org/html/2508.08568v1)
- Full 3D camera controls (rotate/pan/zoom) with body-following mode
- Force and velocity vector visualization
- Timeline scrubbing to explore the full orbital period
The 3D presets are particularly interesting. Try "O₂(1.2)" or "Piano O₆(0.6)" from the Load Presets menu to see configurations where bodies weave in and out of the orbital plane. Most browser simulators I've seen have been 2D.
Built with Three.js. Open to suggestions for additional presets or features!
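For reference, the kind of integration such a simulator does can be sketched in a few lines of velocity-Verlet, using the classic figure-8 initial conditions (G = 1, unit masses; constants from the Chenciner-Montgomery solution):

```python
import math

# Figure-8 three-body orbit, G = 1, m_i = 1 (Chenciner-Montgomery ICs).
pos = [[0.97000436, -0.24308753], [-0.97000436, 0.24308753], [0.0, 0.0]]
v3 = [-0.93240737, -0.86473146]
vel = [[-v3[0] / 2, -v3[1] / 2], [-v3[0] / 2, -v3[1] / 2], v3]

def accels(pos):
    """Pairwise 1/r^2 gravitational accelerations (G = m = 1)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def energy(pos, vel):
    ke = sum(0.5 * (vx * vx + vy * vy) for vx, vy in vel)
    pe = sum(-1.0 / math.dist(pos[i], pos[j])
             for i in range(3) for j in range(i + 1, 3))
    return ke + pe

e0, dt = energy(pos, vel), 1e-3
acc = accels(pos)
for _ in range(6326):  # roughly one period (T ~= 6.3259)
    for i in range(3):  # velocity Verlet step
        for k in range(2):
            pos[i][k] += vel[i][k] * dt + 0.5 * acc[i][k] * dt * dt
    new_acc = accels(pos)
    for i in range(3):
        for k in range(2):
            vel[i][k] += 0.5 * (acc[i][k] + new_acc[i][k]) * dt
    acc = new_acc
drift = abs(energy(pos, vel) - e0)  # symplectic: energy drift stays tiny
```

Velocity Verlet is symplectic, so total energy stays bounded over the period rather than drifting, which is why it (or leapfrog) is the usual choice for orbit simulators.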
Will this simulate the sun and planets of the solar system?
Do these models of n-body gravity predict the perihelion precession of Mercury's orbit?
Newtonian gravity does not predict the anomalous perihelion precession; GR (General Relativity) does, Fedi's SQG (Superfluid Quantum Gravity) with Gross-Pitaevskii does, and this model of gravity fully derived from the Standard Model also predicts the perihelion precession of Mercury's orbit.
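For reference, GR's leading-order perihelion advance per orbit is Δφ = 6πGM / (c²a(1−e²)); plugging in Mercury's orbital elements recovers the famous ~43 arcseconds per century:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 1.989e30    # kg, solar mass
c = 2.998e8     # m/s
a = 5.791e10    # m, Mercury semi-major axis
e = 0.2056      # Mercury eccentricity
T = 87.969      # days, Mercury orbital period

dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / T
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec:.1f} arcsec/century")  # ~43
```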
Lagrange points like L1 and L2 are calculated without consideration for the mass of the moon.
Additional notes on n-body mechanics: https://westurner.github.io/hnlog/#comment-45928486 Ctrl-f n-body, perihelion
More notes on CFD, spiceypy, Navier-Stokes, quantum fluids, and SQG: https://news.ycombinator.com/item?id=44383829 :
> this model of gravity fully-derived from [~~the Standard Model~~ QFT] also predicts perihelion in the orbit of planet Mercury.
And also:
>> "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
Planetary orbits are an n-body problem. GR, SQG, and Gravity from QFT solve for planetary orbits
One-line tensor visualization for PyTorch and NumPy
Review of Carbon materials for next generation supercapacitor applications
ScholarlyArticle: "Exploring recent advances in the versatility and efficiency of carbon materials for next generation supercapacitor applications: A comprehensive review" (2025) https://www.sciencedirect.com/science/article/abs/pii/S00796...
I found this review article by reviewing citations of this open access paper:
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x .. https://scholar.google.com/scholar?start=10&hl=en&as_sdt=5,4...
> 583 Wh/kg
> Abstract: [...] Here we produced SWCNT ropes wrapped in thermoplastic polyurethane elastomers, and demonstrated experimentally that a twisted rope composed of these SWCNTs possesses the remarkable ability to reversibly store nanomechanical energy. Notably, the gravimetric energy density of these twisted ropes reaches up to 2.1 MJ kg−1, exceeding the energy storage capacity of mechanical steel springs by over four orders of magnitude and surpassing advanced lithium-ion batteries by a factor of three. In contrast to chemical and electrochemical energy carriers, the nanomechanical energy stored in a twisted SWCNT rope is safe even in hostile environments. This energy does not deplete over time and is accessible at temperatures ranging from −60 to +100 °C.
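The two figures quoted above are consistent (1 Wh = 3600 J), and the abstract's "factor of three" lines up with a typical ~190 Wh/kg commercial Li-ion cell (my assumed comparison value); a quick check:

```python
# Gravimetric energy density from the abstract:
energy_mj_per_kg = 2.1
wh_per_kg = energy_mj_per_kg * 1e6 / 3600  # MJ/kg -> Wh/kg

# Typical commercial Li-ion cell, assumed here for comparison:
li_ion_wh_per_kg = 190
ratio = wh_per_kg / li_ion_wh_per_kg
print(f"{wh_per_kg:.0f} Wh/kg, ~{ratio:.1f}x Li-ion")
```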
Also from the gscholar "Cited by" list for that paper, I just found:
"One‐Step Transformation of Single‐Walled Carbon Nanotube Networks into High‐Performance Multilayer Graphene‐Rich Films via Laser Shockwave Compaction" (2025) https://advanced.onlinelibrary.wiley.com/doi/abs/10.1002/adf...
Rhombohedral trilayer graphene, Rhombohedral pentalayer graphene, and Twisted bilayer graphene all demonstrate superconductivity. Can laser shockwave compaction transform carbon nanotube networks into superconducting processor components?
What are some possible carbon-based, compostable, non-flammable alternatives to polyurethane elastomers for wrapping (multi- or just plain single-) wall carbon nanotubes that store energy without loss?
> What are some possible carbon-based, compostable, non-flammable alternatives to polyurethane elastomers for wrapping (multi- or just plain) single wall carbon nanotubes that store energy without loss?
I found this article the next day; from https://news.ycombinator.com/item?id=45963395 :
> ScholarlyArticle: "Additive Manufacturing of Molecular Architecture Encoded Stretchable Polyethylene Glycol Hydrogels and Elastomers" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.20...
Propylene glycol > Environmental impacts: https://en.wikipedia.org/wiki/Propylene_glycol
Additive Manufacturing of Stretchable Polyethylene Glycol Hydrogels, Elastomers
ScholarlyArticle: "Additive Manufacturing of Molecular Architecture Encoded Stretchable Polyethylene Glycol Hydrogels and Elastomers" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.20...
[deleted]
Show HN: WGE – High-Performance WAF Library, 4x Faster Than ModSecurity
From https://news.ycombinator.com/item?id=45755142 :
> Shouldn't eBPF be fast at sorting and running rules?
Re: eBPF and WAFs: https://news.ycombinator.com/item?id=45753629#45755142
> What are good metrics for evaluating WAFs?
CUDA-Q Back Ends: Quantum Hardware (QPU)
2025-11: https://nvidia.github.io/cuda-quantum/latest/using/backends/... :
> Ion Trap QPUs: IonQ, Quantinuum
> Superconducting QPUs: Anyon Technologies/Anyon Computing, IQM, OQC, Quantum Circuits, Inc.
> Neutral Atom QPUs: Infleqtion, Pasqal, QuEra Computing
> Photonic QPUs: ORCA Computing, Quantum Control Systems, Quantum Machines
FWIW, tequilahub/tequila supports a number of QC/QPU APIs as well. Currently supported quantum backends from the tequila README: https://github.com/tequilahub/tequila#quantum-backends
> Qulacs (recommended), Qibo, Qiskit [IBM], Cirq [Google, SymPy], PyQuil QLM (works also with myQLM), [ and CUDAQ [NVIDIA]]
> Quantum Chemistry backends: Psi4, Madness, PySCF
More abstractly,
Qubit > "Physical implementations" has a table of quantum effects usable for quantum computing: https://en.wikipedia.org/wiki/Qubit#Physical_implementation
An Open-Source HDMI Keyboard/Video/Mouse (KVM) Switch
Audio mixing would be a useful feature; there could be a free-spinning volume slider or knob for each audio input, and an option to isolate audio from only the focused AV source.
Re: pikvm a "DIY IP-KVM Based on Raspberry Pi", DB9 & RS-232, AMT, DASH: https://news.ycombinator.com/item?id=38062923#38065133
Coherent Synchrotron Radiation by Excitation of SPPs on Near-Critical CNT
ScholarlyArticle: "Coherent synchrotron radiation by excitation of surface plasmon polariton on near-critical solid microtube surface" (2025) https://journals.aps.org/prl/abstract/10.1103/cnym-16hc https://arxiv.org/abs/2507.04561
NewsArticle: "A Radical New Kind of Particle Accelerator Could Transform Science" (2025) https://www.sciencealert.com/a-radical-new-kind-of-particle-...
... CSR Coherent Synchrotron Radiation X-Rays from laser EM on CNT Carbon Nanotubes
How distinct is this from cavity QED?
I found a bunch of tangential articles by trying to find an answer:
From https://news.ycombinator.com/item?id=40876505 :
> 4. Are SPPs distinct from Cherenkov radiation, and photon-electron-phonon vorticity?
"Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) https://www.nature.com/articles/d41586-022-03455-4
How did the emissions vary in this experiment given Terahertz input from a graphene circuit?
From https://news.ycombinator.com/item?id=45715175 :
"Cavity electrodynamics of van der Waals heterostructures" (2024) https://arxiv.org/abs/2403.19745
> ; graphite / graphene optical cavity
..
From https://news.ycombinator.com/item?id=44922581 :
"Grover's algorithm to efficiently prepare quantum states in optical cavity QED" (2025) https://phys.org/news/2025-08-grover-algorithm-efficiently-q...
"Deterministic carving of quantum states with Grover's algorithm" (2025) https://journals.aps.org/pra/abstract/10.1103/s3vs-xz7w
...
"Selective excitation of a single rare-earth ion in an optical fiber" (2025) https://opg.optica.org/oe/fulltext.cfm?uri=oe-33-19-41011 .. https://news.ycombinator.com/item?id=45620981
..
From https://news.ycombinator.com/item?id=41442489 :
"Extreme light confinement and control in low-symmetry phonon-polaritonic crystals" like quartz https://arxiv.org/abs/2312.06805 *
Direct tensor processing with coherent light
"Direct tensor processing with coherent light" (2025) https://www.nature.com/articles/s41566-025-01799-7
Near-Perfect Broadband Quantum Memory Enabled by Spin-Wave Compaction
ScholarlyArticle: "Near-Perfect Broadband Quantum Memory Enabled by Intelligent Spin-Wave Compaction" (2025) https://arxiv.org/abs/2505.02424 :
> Abstract: [...] In this Letter, we break through these constraints by unveiling a Hankel transform spatiotemporal mapping for light-spin-wave conversion in quantum memory and proposing an intelligently light-manipulated strategy for spin wave compaction, which maximizes memory efficiency while suppressing excess noise. This strategy is experimentally demonstrated for a Raman quantum memory in warm 87 Rb [Rubidium 87] atomic vapor with an efficiency up to 94.6±1% and a low noise level of only 0.026±0.012 photon per pulse. The unconditional fidelity reaches 98.91±0.1% with an average of 1.0 photon per pulse for a 17 ns input signal. Our results successfully demonstrate a practical benchmark for broadband quantum memory that may facilitate advancements in high-speed quantum networks, quantum state manipulation, and scalable quantum computation.
NewsArticle: "Raman quantum memory demonstrates near-unity performance" (2025) https://phys.org/news/2025-11-raman-quantum-memory-unity.htm...
Ask HN: Architecting audit-grade ESG platforms – AI assistants vs. human CTOs
Background: I'm a solo technical founder building Velumin, a carbon accounting platform for Fortune 500 compliance (CSRD, BRSR, GHG Protocol).
The challenge: ESG platforms need:

- Deterministic calculations (auditors reject "AI math")
- Immutable audit trails (SOX/SOC2 requirements)
- Multi-jurisdictional compliance (EU CSRD, India BRSR, US SEC)
- Real-time anomaly detection + AI document generation

*My experiment:* I used Cursor, GitHub Copilot, and Amazon Q (Kiro) to architect the entire stack, guided by a structured "WAR-MODE" prompt covering:

1. Technical architecture (multi-region, event sourcing, circuit breakers)
2. ESG methodology (GHG Protocol validators, uncertainty quantification)
3. Regulatory engines (BRSR/CSRD/SEC automation)
4. Product/UX (role-based onboarding, supplier agent, no-code workflows)

*AI correctly identified:*

- "Never use LLMs for emission calculations—auditors will reject it"
- "Implement WORM storage for audit trails, not 'agent memory'"
- "Multi-model strategy: GPT-4V for OCR, Claude for reports, rules for compliance"
- "India-first BRSR compliance = competitive moat"

*What I'm unsure about:*

- Are there architectural anti-patterns AI tools systematically miss?
- For compliance-critical systems, is AI review a complement or substitute for human CTOs?
- What's the right balance of AI-generated architecture vs. human validation?
*For experienced CTOs/architects:* What would you want to validate in a system like this that AI likely couldn't catch? And conversely, are there areas where AI review is now legitimately superior to human review (e.g., exhaustive checklist coverage)?
I'm happy to share:

- The full WAR-MODE prompt structure (so you can adapt it)
- Our architecture decisions and trade-offs
- Specific gaps we're worried about
Curious to hear from folks building audit-grade or compliance-heavy systems.
Some forms of carbon are worse than others but carbon mass doesn't account for the difference in impact. Aren't there additional externalities to account for in addition to just carbon?
On whether ESG is worth the time (compared to blindly investing in a universe of stocks that look good on paper relative to other assets only because they're dumping external costs onto everyone without accountability):
"Companies with good ESG scores pollute as much as low-rated rivals" (2023) https://news.ycombinator.com/item?id=36980661
How should carbon accounting handle a process that generates porous graphene filters that capture carbon from CO2?
OP here — really appreciate these questions because they get at the real limitations of carbon accounting frameworks.
*1. "Carbon ≠ carbon": different gases, different externalities*
Totally agree. CO₂ mass alone is a simplification. That's why GHG Protocol uses GWP factors to convert different gases into CO₂e:

- CH₄: 28–34× CO₂
- N₂O: 265–298×
- SF₆/HFCs: 10,000×+

But even GWP misses important dimensions:

- Timing effects (short-lived vs. long-lived gases)
- Toxicity and pollution
- Ozone impacts
- Ecosystem and social externalities
So in our system, carbon accounting is just the starting layer. CSRD already forces companies to track water, biodiversity, pollution, and circularity on top of climate (ESRS E2-E5).
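As a sketch of that starting layer: CO₂e conversion is just multiplication by a GWP factor. The factors below are the lower bounds quoted above plus the AR5 value for SF₆; both the dictionary and the function name are illustrative, not Velumin's actual API.

```python
# GWP100 factors (illustrative; lower bounds from the list above, AR5 value for SF6)
GWP100 = {"CO2": 1, "CH4": 28, "N2O": 265, "SF6": 23500}

def co2e_tonnes(mass_t: float, gas: str) -> float:
    """Convert a gas mass in tonnes to tonnes of CO2-equivalent."""
    return mass_t * GWP100[gas]

print(co2e_tonnes(2.0, "CH4"))  # 56.0
```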
*2. Re: ESG ratings not correlating with lower emissions*
Fully agree with the critique. Most ESG scores measure:

- Disclosures instead of actual performance
- Policies instead of physics
- Governance/social weighting that dilutes environmental signals

That's why we avoid "ESG scores" completely. We follow:

- Strict GHG Protocol methods
- Audit-grade emission-factor calculations
- CSRD/BRSR/SEC climate-rule compliance
The 2023 study you cited is exactly why deterministic calculation matters more than ratings.
*3. On porous graphene and carbon-capture edge cases*
This is where things get interesting.
Under GHG Protocol:

- Manufacturing the filter → positive emissions (Scope 1/2/3)
- Capturing CO₂ → potential removal
- But: only counts as removal if storage is permanent (>100 yrs) and third-party verified (e.g., Puro.earth, CDR.fyi)
- Temporary use (e.g., carbonation) is not removal—just delayed re-emission

In our accounting model we separate:

- Emissions (tCO₂e released)
- Avoidance (vs. baseline)
- Removals (atmospheric drawdown)
- Permanence categories (geological, mineralization, engineered, biomass)
- Uncertainty ranges (required under CSRD ESRS E1)
Your graphene example is exactly the type of nuance that standard ESG dashboards usually ignore.
*4. Genuine curiosity*
Do you work in carbon accounting, lifecycle analysis, or climate methodology? Your questions suggest real hands-on experience with the edge cases. We're building Velumin's methodology to handle exactly these scenarios—would love to hear more about your experience if you're open to it.
---
*Side note: Still interested in the original topic* — for compliance-heavy systems, I'm trying to understand where experienced engineers think AI architecture review breaks down vs. where it actually outperforms humans (especially in checklist coverage).
The New 2025 OWASP Top Ten
Also,
"CWE Version 4.18 Now Available" (2025-09) https://cwe.mitre.org/news/archives/news2025.html#september0...
It looks like the OWASP mapping in CWE was last updated in 2021: "CWE VIEW: Weaknesses in OWASP Top Ten (2021)" https://cwe.mitre.org/data/definitions/1344.html
GNU C Library Adds Linux "Mseal" Function for Memory Sealing
From "Introduction of mseal" https://docs.kernel.org/userspace-api/mseal.html#mseal-doesn... :
> mseal doesn’t block
> In a nutshell, mseal blocks certain mm syscall from modifying some of VMA’s attributes, such as protection bits (RWX). Sealed mappings doesn’t mean the memory is immutable.
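A minimal sketch of invoking mseal(2) from userspace via the raw syscall interface, assuming Linux >= 6.10 and syscall number 462 (an assumption; check your kernel's syscall table). On older kernels the call simply returns ENOSYS. The page is mapped directly via libc so the Python runtime never tries to munmap the sealed region.

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)
SYS_MSEAL = 462  # assumed syscall number (Linux >= 6.10)

# mmap an anonymous read-only page directly via libc, bypassing Python's mmap
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
PROT_READ, MAP_PRIVATE, MAP_ANONYMOUS = 0x1, 0x02, 0x20
addr = libc.mmap(None, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)

def mseal(address: int, length: int, flags: int = 0) -> int:
    """Seal a VMA against later mprotect/munmap; 0 on success, errno on failure."""
    ret = libc.syscall(SYS_MSEAL, ctypes.c_void_p(address),
                       ctypes.c_size_t(length), ctypes.c_ulong(flags))
    return 0 if ret == 0 else ctypes.get_errno()

err = mseal(addr, 4096)  # 0, or e.g. errno.ENOSYS on kernels without mseal
```

Per the kernel docs quoted above, a successful seal blocks later changes to the mapping's attributes (mprotect, munmap, etc.), but does not make the memory contents immutable.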
A universal speed limit for spreading of coherence
"A universal speed limit for spreading of coherence" (2025) https://www.nature.com/articles/s41586-025-09735-z :
> Abstract: Discoveries of fundamental limits for the rates of physical processes, from the speed of light to the Lieb–Robinson bound for information propagation [1,2], often lead to breakthroughs in the understanding of the underlying physics. Here we observe such a limit for a paradigmatic many-body phenomenon, the spreading of coherence during the formation of a weakly interacting Bose–Einstein condensate [...]. We study condensate formation in an isolated homogeneous atomic gas [...] that is initially far from equilibrium, in an incoherent low-energy state, and condenses as it relaxes towards equilibrium. Tuning the interatomic interactions that drive condensation, we show that the spreading of coherence through the system is initially slower for weaker interactions and faster for stronger ones, but always eventually reaches the same limit, at which the square of the coherence length grows at a universal rate given by the ratio of Planck’s constant and the particle mass, or, equivalently, by the quantum of velocity circulation associated with a quantum vortex. These observations are robust to changes in the initial state, the gas density, and the system size. Our results provide benchmarks for theories of universality far from equilibrium [...], are relevant for quantum technologies that rely on large-scale coherence, and invite similar measurements in other systems.
The space changes, so GR and the speed of light are preserved.
"Slow and fast light in plasma using optical wave mixing" (2021) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.12... .. https://scholar.google.com/scholar?cluster=94797501996846831... :
> We show the first experimental demonstration of slow and fast light in a plasma, measuring group velocities between 0.12c and −0.34c .
Also, photons can be FTL (relative to an outside observer) in dielectrics;
> LightSlinger antennae are FTL within the dielectric, but the EMR is not FTL; from https://news.ycombinator.com/item?id=37342016
Also, do these findings apply to this post-Fourier model of thermal spreading limits at material interfaces given phase ? From https://news.ycombinator.com/item?id=45921309 :
> ScholarlyArticle: "Time-domain theory of transient heat conduction in the local limit" (2025) https://journals.aps.org/prb/abstract/10.1103/p8wg-p1j3
> NewsArticle: "From engines to nanochips: Physicists redefine how heat really moves" (2025-10) https://phys.org/news/2025-10-nanochips-physicists-redefine....
But specifically on velocity in superfluids, a.k.a. BECs (Bose–Einstein condensates):

This model predicts the many-body gravitational motions of the planets, including the perihelion precession of Mercury, and also vortices in superfluids:
"Physical vacuum as a dilatant fluid yields exact solutions to Pioneer anomaly and Mercury’s perihelion precession" (2019) https://cdnsciencepub.com/doi/10.1139/cjp-2018-0744 .. https://news.ycombinator.com/item?id=45220585
This model derives gravity from the Standard Model of particle physics:
"Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
And this model describes all in terms of consistency maximization
> "The Self-Consistent Coherence-Maximizing Universe: Complete Derivation of the Standard Model and General Relativity from Mathematical Self-Consistency" (2025) https://www.academia.edu/144466150/The_Self_Consistent_Coher... :
> Abstract: We derive the complete structure of fundamental physics from a single principle: Quantum coherence maximization under self-consistency constraints. [...]
And this model describes all terms in terms of Consistency Functional K; https://news.ycombinator.com/item?id=45909513 :
"The Consistency Functional K: Variational Unification of Quantum Mechanics, Thermodynamics, and General Relativity" (2025) https://doi.org/10.5281/zenodo.17405041
\delta\mathcal{K}(\rho, g) = 0
And Wolfram has a unified theory, too.

What are the limits to model unification?
A new quantum toolkit for optimization
ScholarlyArticle: "Optimization by decoded quantum interferometry" (2025) https://www.nature.com/articles/s41586-025-09527-5 :
> Abstract: [...] Here we introduce decoded quantum interferometry (DQI), a quantum algorithm that uses the quantum Fourier transform to reduce optimization problems to decoding problems. When approximating optimal polynomial fits over finite fields, DQI achieves a superpolynomial speed-up over known classical algorithms
Time-domain theory of transient heat conduction in the local limit
ScholarlyArticle: "Time-domain theory of transient heat conduction in the local limit" (2025) https://journals.aps.org/prb/abstract/10.1103/p8wg-p1j3
NewsArticle: "From engines to nanochips: Physicists redefine how heat really moves" (2025-10) https://phys.org/news/2025-10-nanochips-physicists-redefine....
New Graphene Tech Powers Supercapacitors to Rival Traditional Batteries
"Operando interlayer expansion of multiscale curved graphene for volumetrically-efficient supercapacitors" (2025) https://www.nature.com/articles/s41467-025-63485-0
Heartbeats in Distributed Systems
Related advice based on my days working at Basho: find a way to recognize, and terminate, slow-running (or erratically-behaving) servers.
A dead server is much better for a distributed system than a misbehaving one. The latter can bring down your entire application.
Docker and Kubernetes have health check mechanisms to help with this:
Docker docs > Dockerfile HEALTHCHECK instruction: https://docs.docker.com/reference/dockerfile/#healthcheck
Podman docs > podman-healthcheck-run, docker-healthcheck-run: https://docs.podman.io/en/v5.4.0/markdown/podman-healthcheck...
Kubernetes docs > "Configure Liveness, Readiness and Startup Probes" https://kubernetes.io/docs/tasks/configure-pod-container/con...
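A minimal sketch of the pattern those probes implement: a probe command that treats "slow" the same as "dead" by exiting nonzero. The endpoint and threshold here are hypothetical placeholders.

```python
import time

def probe_exit_code(probe, threshold_s: float = 1.0) -> int:
    """Return 0 (healthy) if probe() succeeds within threshold_s, else 1.
    Docker HEALTHCHECK and Kubernetes probes treat nonzero/failure as unhealthy."""
    start = time.monotonic()
    try:
        probe()  # e.g. an HTTP GET against a /healthz endpoint (hypothetical)
    except Exception:
        return 1
    return 0 if time.monotonic() - start < threshold_s else 1

# A slow-but-alive server fails the check just like a dead one:
status = probe_exit_code(lambda: time.sleep(0.05), threshold_s=0.01)  # 1
```

Wiring such a script into a Dockerfile `HEALTHCHECK CMD` or a Kubernetes liveness probe lets the orchestrator restart the misbehaving instance rather than letting it degrade the whole system.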
Research pinpoints bugs in popular science software (Jupyter)
Did they control for that people don't write tests and test assertions in notebooks?
Did they control for that people tend to not maintain expository code in notebooks in the same way that they maintain modules?
Are most .ipynb demo notebooks?
What percentage of users use a notebook-first workflow; such that code written in an .ipynb is auto-exported to .py modules?
ipytest is one way to run pytest in a notebook. assert statements in named test functions are another way to run tests in a notebook.
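For example, a named test function with plain asserts runs in any notebook cell, and the same function can be collected by pytest via ipytest; the `normalize` function here is just an illustration.

```python
def normalize(xs):
    """Scale a list of numbers so it sums to 1."""
    total = sum(xs)
    return [x / total for x in xs]

def test_normalize():
    assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]
    assert abs(sum(normalize([3, 5, 9])) - 1.0) < 1e-12

test_normalize()  # call directly in a cell, or let ipytest/pytest collect it
```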
How much of the difference in code quality between notebooks and modules is due to the tool and how much is due to the type of code that people tend to write in notebooks?
diVine, a Vine reboot that includes Vine's video archive
From "TikTok has turned culture into a feedback loop of impulse and machine learning" (2025) https://news.ycombinator.com/item?id=45200337 :
> Vine had 6 second short form video in 2012.
> Vine: https://en.wikipedia.org/wiki/Vine_(service)
> Short-form content: https://en.wikipedia.org/wiki/Short-form_content
Did Vine have that impact back in the day?
The Consistency Functional K: Variational Unification of QM, Thermodynamics, GR
"The Consistency Functional K: Variational Unification of Quantum Mechanics, Thermodynamics, and General Relativity" (2025) https://doi.org/10.5281/zenodo.17405041
From https://philarchive.org/archive/SABTCF-3 2025-10-16 draft :
> 9.6. The Unified Table of Correspondence
| Physical Domain | Consistency Condition | Emergent Law |
| --- | --- | --- |
| Quantum Mechanics | \delta_\rho \mathcal{K} = 0 | Unitarity, Born Rule |
| Thermodynamics | CPTP monotonicity | Second Law, Entropy |
| Relativity | \delta_\rho \mathcal{K} = 0 | Einstein Equations |
| Gauge Theory | Symmetry Invariance | Yang–Mills Equations |
| Cosmology | Global extremum of \mathcal{K} | Friedmann Dynamics |
| Causality | I_{\rho}(A : C \mid B) = 0 | Quantum Conditional Mutual Information (QCMI) |

(IIUC, the Causality row is inferred, due to a minor LaTeX table error in the source.)
> 9.7. The Fundamental Equation of Reality

> All of the above can be compactly expressed by the defining relation:

\delta\mathcal{K}(\rho, g) = 0

From https://news.ycombinator.com/item?id=45606837 :
> "The Self-Consistent Coherence-Maximizing Universe: Complete Derivation of the Standard Model and General Relativity from Mathematical Self-Consistency" (2025) https://www.academia.edu/144466150/The_Self_Consistent_Coher... :
> Abstract: We derive the complete structure of fundamental physics from a single principle: Quantum coherence maximization under self-consistency constraints. [...]
Skew Pockels effect and metallic electro-optics in gapped bilayer graphene
[deleted]
Pockels effect: https://en.wikipedia.org/wiki/Pockels_effect
ScholarlyArticle: "Skew-scattering Pockels effect and metallic electro-optics in gapped bilayer graphene" [at Terahertz frequencies] (2024) https://arxiv.org/abs/2407.12096
Helion: A high-level DSL for performant and portable ML kernels
Asking as someone who is really out of the loop: how much of ML development these days touches these “lower level” parts of the stack? I’d expect that by now most of the work would be high level, and the infra would be mostly commoditized.
> how much of ML development these days touches these “lower level” parts of the stack? I’d expect that by now most of the work would be high level
Every time the high-level architectures of models change, there are new lower-level optimizations to be done. Even recent releases like GPT-OSS add new areas for improvement, like MXFP4, that require the lower-level parts to be created and optimized.
How often do hardware optimizations get created for lower level optimization of LLMs and Tensor physics? How reconfigurable are TPUs? Are there any standardized feature flags for TPUs yet?
Is TOPS/Whr a good efficiency metric for TPUs and for LLM model hosting operations?
From https://news.ycombinator.com/item?id=45775181 re: current TPUs in 2025; "AI accelerators" :
> How does Cerebras WSE-3 with 44GB of 'L2' on-chip SRAM compare to Google's TPUs, Tesla's TPUs, NorthPole, Groq LPU, Tenstorrent's, and AMD's NPU designs?
this is like 5 different questions all across the landscape - what exactly do you think answers will do for you?
> How often do hardware optimizations get created for lower level optimization of LLMs and Tensor physics?
LLMs? all the time? "tensor physics" (whatever that is) never
> How reconfigurable are TPUs?
very? as reconfigurable as any other programmable device?
> Are there any standardized feature flags for TPUs yet?
have no idea what a feature flag is in this context nor why they would be standardized (there's only one manufacturer/vendor/supplier of TPUs).
> Is TOPS/Whr a good efficiency metric for TPUs and for LLM model hosting operations?
i don't see why it wouldn't be? you're just asking is (stuff done)/(energy consumed) a good measure of efficiency to which the answer is yes?
> have no idea what a feature flag is in this context nor why they would be standardized (there's only one manufacturer/vendor/supplier of TPUs).
x86, ARM, and RISC-V have all standardized CPU feature flags, which can be reviewed on Linux with /proc/cpuinfo or with dmidecode:

grep -E '^processor|^flags|^Features|^BogoMIPS|^CPU' /proc/cpuinfo
There are multiple TPU vendors.
I listed multiple AI accelerator TPU products in the comment you are replying to.

> How reconfigurable are TPUs?
TIL Google's TPUs are reconfigurable with OCS Optical Circuit Switches that can be switched between for example 3D torus or twisted torus configurations.
(FWIW also, quantum libraries mostly have Line qubits and Lattice qubits. There is a recent "Layer Coding" paper; to surpass Surface Coding.)
But back to classical TPUs:

I had already started preparing a response to myself to improve those criteria; and then, paraphrasing from 2.5pro:
> Don't rank by TOPS/wHr alone; rank by TOPS/wHr @ [Specific Precision]. Don't rank by Memory Bandwidth alone; rank by Effective Bandwidth @ [Specific Precision].
Hardware rank criteria for LLM hosting costs:

- Criterion 1: EGB (Effective Generative Bandwidth) = Memory Bandwidth (GB/s) / Precision (Bytes)
- Criterion 2: GE (Generative Efficiency) = EGB / Total Board Power (Watts)
- Criterion 3: TTFT Potential = Raw TOPS @ Prompt Precision
LLM hosting metrics: Tokens Per Second (TPS) for throughput, Time to First Token (TTFT) for latency, and Tokens Per Joule for efficiency.
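The criteria above reduce to simple arithmetic; a sketch with hypothetical board numbers (not any vendor's specs):

```python
def effective_generative_bandwidth(mem_bw_gbs: float, precision_bytes: float) -> float:
    """Criterion 1: EGB = memory bandwidth / bytes per weight (decode-phase bound)."""
    return mem_bw_gbs / precision_bytes

def generative_efficiency(egb: float, board_power_w: float) -> float:
    """Criterion 2: GE = EGB per watt of total board power."""
    return egb / board_power_w

# hypothetical board: 1000 GB/s memory bandwidth, FP8 weights (1 byte), 300 W
egb = effective_generative_bandwidth(1000.0, 1.0)  # 1000.0
ge = generative_efficiency(egb, 300.0)
```

This is why ranking by TOPS/Wh alone misleads: two boards with identical TOPS but different memory bandwidth at the serving precision will have very different decode throughput.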
> There are multiple TPU vendors
There are not - TPU is literally a Google trademark:
> Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google.
https://en.wikipedia.org/wiki/Tensor_Processing_Unit
The rest of what you're talking about is irrelevant
"A Brief Guide of xPU for AI Accelerators" https://www.sigarch.org/a-brief-guide-of-xpu-for-ai-accelera...
NPU: Neural Processing Unit: https://en.wikipedia.org/wiki/Neural_processing_unit
Coprocessor: https://en.wikipedia.org/wiki/Coprocessor
[deleted]
Experimental evidence for nodal superconducting gap in moiré graphene
"Experimental evidence for nodal superconducting gap in moiré graphene" (2025) https://dspace.mit.edu/handle/1721.1/163500 :
> Abstract: [...] Here, we report simultaneous tunneling spectroscopy and transport measurements of magic-angle twisted trilayer graphene. This approach allows us to identify two coexisting V-shaped tunneling gaps with different energy scales: a distinct low-energy superconducting gap that vanishes at the superconducting critical temperature and magnetic field, and a higher-energy pseudogap. The superconducting tunneling spectra display a linear gap-filling behavior with temperature and magnetic field and exhibit the Volovik effect, consistent with a nodal order parameter. Our work suggests an unconventional nature of the superconducting gap and establishes an experimental framework for multidimensional investigation of tunable quantum materials.
Quantum critical electro-optic and piezo-electric nonlinearities in STO
"Quantum critical electro-optic and piezo-electric nonlinearities" [in perovskite Strontium Titanate especially at cryogenic temperatures] (2025) https://www.science.org/doi/10.1126/science.adx8657
Does Wayland fractional scaling work with games in 2025?
> Lastly, I should mention the existence of wp-fractional-scale-v1. Wayland applications that add support for this can allocate a correctly-sized surface to draw on and skip the overscale-then-downscale process. This is much like how KDE treats X11 applications by default and requires applications to properly render the scaling on their own. Since this was only merged relatively recently, a whole generation of Wayland code had already been written without it. I do not expect applications to universally support this any time soon. That said, Chrome has already merged support for this and Firefox support is available behind the flag widget.wayland.fractional-scale.enabled. Perhaps Proton will support this in the future, which would allow games to properly run on the right resolution, and skip all the mess with Xwayland and scaling.
fractional-scale-v1 Wayland Fractional scaling protocol:
"Protocol for requesting fractional surface scales" https://wayland.app/protocols/fractional-scale-v1
Comparison Traits – Understanding Equality and Ordering in Rust
I find floating point NaN != NaN quite annoying. But this is not related to Rust: this affects all programming languages that support floating point. All libraries that want to support ordering for floating point need to handle this special case, that is, all sort algorithms, hash table implementation, etc. Maybe it would cause less issues if NaN doesn't exist, or if NaN == NaN. At least, it would be much easier to understand and more consistent with other types.
I wonder if "any code that would create a NaN would error" would suffice here. I don't think it makes sense when you actually start to implement it, but I do feel like making a NaN error would be helpful. Why would you want to handle an NaN?
If you don't handle NaN values, and there are NaNs in the real observations (for example, from sensors that sometimes return NaN or outliers), then the sort order is indeterminate regardless of whether NaN == NaN: multiple records share the same key value of NaN, so there isn't enough entropy in the key for a partial or total ordering among them.
How should an algorithm specify that it should sort by insertion order instead of memory address order if the sort key is NaN for multiple records?
That's the default in SQL Relational Algebra IIRC?
> then the sort order there is indeterminate
Well each programming language has a "sort" method that sorts arrays. Should this method throw an exception in case of NaN? I think the NaN rules were the wrong decision. Because of these rules, everywhere there are floating point numbers, the libraries have to have special code for NaN, even if they don't care about NaN. Otherwise there might be ugly bugs, like sorting running into endless loops, data loss, etc. But well, it can't be changed now.
The best description of the decision is probably [1], where Stephen Canon (former member of the IEEE-754 committee if I understand correctly) explains the reasoning.
[1] https://stackoverflow.com/questions/1565164/what-is-the-rati...
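One concrete way to make the NaN ordering deterministic, analogous to SQL's NULLS LAST, is an explicit sort key; this is a sketch, and sorting NaNs last is a policy choice, not a language default:

```python
import math

data = [3.0, float("nan"), 1.0, 2.0]

def nan_last(x: float):
    """Sort key that gives NaNs a deterministic position (last),
    analogous to SQL's NULLS LAST."""
    return (math.isnan(x), 0.0 if math.isnan(x) else x)

ordered = sorted(data, key=nan_last)  # [1.0, 2.0, 3.0, nan]
```

Without such a key, every comparison against NaN is False, so sorted() gives no total-order guarantee for the NaN elements.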
Were the procedures for handling Null and Null pointers well defined even for C in 1985 when IEEE-754 was standardized?
There's probably no good way to standardize how to fill when values are null or nan. How else could this be solved without adding special cases for NaN?
In a language with type annotations we indicate whether a type is Optional:
def sum(a: float|None, b: Optional[float]) -> None|float :
def sum(a: float|np.nan|None, b: Optional[float|np.nan]) -> None|float|np.nan :

Elon Musk says building his own 'TeraFab' chip fab may be the only answer
> “Building advanced chip manufacturing is extremely hard,” Jensen Huang, chief executive of Nvidia, said at a TSMC event on Thursday.
Perhaps. But if TSMC, Samsung, Intel, GlobalFoundries, SK Hynix, SMIC, and UMC can all do it, it isn't THAT esoteric.
Lol. I assume you’re being facetious. But those companies have all been at it for decades.
Are any of them making compostable sustainable chips out of graphene or carbon nanotubes yet though?
They all compete for silicon (SiO2), phosphorus (P), boron (B), copper (Cu), and neon (Ne), and for PFAS for photoresist masks.
Graphene can be made from CO2 and unsorted plastics, though graphene is typically manufactured from imported graphite FWIU.
Traditional nanolithography works on silicon carbide.
FET nano transistors can be patterned into graphene and other forms of carbon.
Graphene oxide and Carbon epoxide are probably better substrates than doped Silicon.
The work functions of graphene oxide and carbon nanotubes are different enough for reduced graphene oxide to be the substrate for carbon-based integrated electronic, phononic, and photonic computing chips.
Alternate semiconductor materials (Graphene, SiC) could circumvent some of the expensive steps required for Si, but not all. Here's a good article about the unimaginably high purity standards for the water used in the industry:
https://www.asianometry.com/p/the-purest-water-in-the-world
The average fab uses about 2,000 gallons of ultrapure water each minute, 2-3 million gallons each day.
Pipes and tubing are constantly shedding particles into flowing water - with random bursts that drive everyone crazy.
Once the killer particle size limit ratcheted down to 20 nanometers - a limit we hit roughly about ten years ago - engineers realized that there existed no detection tool for consistently detecting sub-10 nanometer particles in low quantities.
It is also possible to make water filters and air filters out of graphene layers with one or more pore sizes.
The pore size of graphene can be varied parametrically.
Water filters and air filters and superconducting electronic computers can be made by stacking layers of graphene.
Some products made out of graphene are compostable. Other forms of carbon are considered soil amendments.
Does or would passive distillation allow some of the waste sediment to settle?
Graphene can be added to concrete to make it stronger.
Could the graphene in water from next generation graphene-based semiconductor and superconductor production be used as a concrete additive?
Scalable nano positioning of highly coherent color centers in prefab diamond
> Prefabricated diamond nanostructures
"Rapid, low-temp nanodiamond formation by electron-beaming adamantane C–H bonds" (2025) https://www.science.org/doi/10.1126/science.adw2025 .. https://news.ycombinator.com/item?id=45772158
296K = 73.13°F = 22.85°C
100K = -279.67°F = -173.15°C
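As a quick sanity check on those conversions, a trivial sketch:

```python
def kelvin_to_celsius(k):
    """Convert kelvin to degrees Celsius."""
    return k - 273.15

def kelvin_to_fahrenheit(k):
    """Convert kelvin to degrees Fahrenheit."""
    return kelvin_to_celsius(k) * 9 / 5 + 32

# 296 K is roughly room temperature; 100 K is a cryogenic benchmark.
print(round(kelvin_to_celsius(296), 2), round(kelvin_to_fahrenheit(296), 2))   # 22.85 73.13
print(round(kelvin_to_celsius(100), 2), round(kelvin_to_fahrenheit(100), 2))   # -173.15 -279.67
```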
"Quantum Nanodiamonds from 1 Step, Industrial-Scale Pressure and Temp Process" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.20... ..
https://news.ycombinator.com/item?id=45772190
ScholarlyArticle:
"Scalable nanoscale positioning of highly coherent color centers in prefabricated diamond nanostructures" (2025) https://www.nature.com/articles/s41467-025-64758-4
Async QUIC and HTTP/3 made easy: Tokio-quiche is now open-source
Does it work to call tokio-quiche from Python with pyo3-asyncio? https://github.com/awestlake87/pyo3-asyncio
cloudflare/quiche/h3i: https://github.com/cloudflare/quiche/tree/master/h3i :
> h3i consists of an interactive command-line tool and library for low-level HTTP/3 debugging and testing
Wafer-Scale AI Compute: A System Software Perspective
From https://www.sigops.org/2025/wafer-scale-ai-compute-a-system-... :
> When designing efficient software for wafer-scale compute, PLMR can serve as a checklist: performance-critical AI kernel and parallelism strategy should be PLMR-compliant. Importantly, PLMR is not limited to wafer-scale chips; it reflects a broader architectural shift from unified memory to large-scale NUMA designs. Unified memory, typically implemented with crossbars or all-to-all interconnects, scales poorly because networking cost grows exponentially with the number of cores and memory units. By contrast, emerging interconnects such as 2D mesh, ND mesh, 2D torus, and 3D torus scale with linear networking cost, but shift the complexity of maintaining efficient parallel computation onto software.
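The scaling argument in the quote can be made concrete with a back-of-the-envelope link count (a sketch; crossbar/all-to-all cost actually grows quadratically in the number of cores, which is still superlinear, while mesh cost grows linearly):

```python
def crossbar_links(n):
    """All-to-all interconnect: every pair of cores gets a link -> O(n^2)."""
    return n * (n - 1) // 2

def mesh_2d_links(rows, cols):
    """2D mesh: each core links only to its grid neighbors -> O(n)."""
    return rows * (cols - 1) + cols * (rows - 1)

# Link counts diverge quickly as the core count grows.
for n in (4, 8, 16):
    cores = n * n
    print(cores, "cores:", crossbar_links(cores), "crossbar links vs",
          mesh_2d_links(n, n), "2D-mesh links")
```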
Plasma lens for focusing XUV and X-ray attosecond pulses
"Plasma lens for focusing attosecond pulses" (2025) https://www.nature.com/articles/s41566-025-01794-y
Munich's surfers left stunned after famed river wave vanishes
I visited Munich back in 2013 and recorded several surfers on the wave [0]. For reference I was standing on the bridge just above the platform in the article's second photo. It was pretty neat, and I'm sad that it might be lost.
I’m sure they will repair it in no time. It’s too much of a tourist attraction to just let it be.
The article mentions they want to bring it back, they just don’t know how they lost it as no structural changes were made.
I think it’s an opportunity to make structural changes and shape that peak like the German Engineers we all know. It will be back better than ever.
It will be fine.
Schauberger Instream River Training
Instream River Training: https://de.wikipedia.org/wiki/Instream_River_Training
River engineering: https://en.wikipedia.org/wiki/River_engineering
From an email for a company ( https://desertcontrol.com ) that specializes in reducing irrigation needs and fertilizing especially sandy soil with silt and LNC Liquid Natural Clay :
> "Schauberger's Legacy: The Water Technology Revolution Powered by Vortex Force" https://youtube.com/watch?v=N_58gtKlfsI
- [Instream River Training]
- Microgroins
- Control the river from the middle of it, not with the banks
- Hyperbolic funnels aerate
- Vacuum kills bacteria
- Chemical-free water treatment
- Oxygenating or aerating water makes it more fertilizing
Scalable synthesis of CO2-selective porous single-layer graphene membranes
ScholarlyArticle: "Scalable synthesis of CO2-selective porous single-layer graphene membranes" (2025) https://www.nature.com/articles/s44286-025-00203-z
Video‐rate tunable colour electronic paper with human resolution
ScholarlyArticle: "Video‐rate tunable colour electronic paper with human resolution" (2025) https://www.nature.com/articles/s41586-025-09642-3
NewsArticle: "Colour e-paper screen offers high-res video with low energy use" (2025) https://www.newscientist.com/article/2500981-colour-e-paper-...
Guideline has been acquired by Gusto
Notes about 401K backtesting and funds: https://news.ycombinator.com/item?id=42387927
Diffwatch – Watch AI agents touch the FS and see diffs live
From https://news.ycombinator.com/item?id=45516584#45517613 re: LTM and STM and LLMs:
> jj autocommits when the working copy changes, and you can manually stage against @-: https://news.ycombinator.com/item?id=44644820
lazyjj is a TUI for jj: https://github.com/Cretezy/lazyjj
Would a live log follow mode for lazyjj solve?
diffwatch is kinda general purpose; besides the agent work you could watch different processes doing stuff in your homedir, for example
Cool tool! Is the inotify directory/file watch count the limit?
I can't seem to remember the name of the pre-containers tool that creates a virtual build root and traps all the file syscalls. It's not strace.
Easier to trace everything an AI runs by running the agent in a container with limited access to specific filesystem volumes.
eBPF is the fastest way to instrument in Linux AFAIU:
Traceleft: https://github.com/ShiftLeftSecurity/traceleft
Tracee: https://github.com/aquasecurity/tracee
Falco docs > Supported events: https://falco.org/docs/reference/rules/supported-events/
Tetragon: https://github.com/cilium/tetragon
strace could have a --diff-fs-syscall-files option:
strace -p PID -f -F -e trace=file -s 65536
it uses the os-independent fsnotify lib, it surely has its limits. eBPF is great, but Linux only, yeah
On MacOS:
sudo dtrace -n 'vfs::*:entry { printf("%-16s %-6d %s", execname, pid, probefunc); }'
sudo dtrace -n 'vfs:lookup:entry { printf("%-16s %-6d %s", execname, pid, copyinstr(arg1)); }'
TIL Dtrace is included in recent builds of Windows 11 and Server 2025: https://learn.microsoft.com/en-us/windows-hardware/drivers/d... ; # Must be run as Administrator
dtrace -n "syscall::NtCreateFile:entry, syscall::NtReadFile:entry, syscall::NtWriteFile:entry { printf(\"%s (%d) - %s\", execname, pid, probefunc); }"
It's possible to trace file system calls in Windows with procmon.exe by saving a .pmc config file and then loading it from the CLI:
# uncheck everything except "Show File System Activity"
# Filter > Drop Filtered Events
# File > Export Configuration...
# Must be run as Administrator
procmon.exe /AcceptEula /Quiet /Minimized /LoadConfig C:\Tools\fs-only.pmc /BackingFile C:\Logs\FileSystemTrace.pml
It's also possible to trace lower-level file system calls in Windows with logman.exe, but it's necessary to parse the traces that it generates.
Then with just bpftrace on Linux:
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%-6d %-16s %s\n", pid, comm, str(args.filename)); }'
sudo bpftrace -e 'kprobe:vfs_read, kprobe:vfs_write, kprobe:vfs_open { printf("%-16s %-6d %s\n", comm, pid, probefunc); }'
... According to 2.5pro on the CLI, strace, dtrace, and bpftrace could each have a --diff-fs-syscall-files option.
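As a hypothetical sketch of what such a diff-fs-syscall-files mode might build on: parse `strace -f -e trace=file` output and collect the set of touched paths per syscall (the line format here is assumed from typical strace output; a real implementation would handle truncation, resumed syscalls, etc.):

```python
import re

# Matches lines like: openat(AT_FDCWD, "/etc/hosts", O_RDONLY) = 3
# and multi-process lines like: [pid  1234] unlink("/tmp/x") = 0
STRACE_FILE_RE = re.compile(r'^(?:\[pid\s+\d+\]\s+)?(\w+)\(.*?"([^"]+)"')

def touched_paths(strace_lines):
    """Return {path: {syscall names}} from strace -f -e trace=file output."""
    paths = {}
    for line in strace_lines:
        m = STRACE_FILE_RE.match(line)
        if m:
            syscall, path = m.groups()
            paths.setdefault(path, set()).add(syscall)
    return paths

sample = [
    'openat(AT_FDCWD, "/etc/hosts", O_RDONLY) = 3',
    '[pid  1234] unlink("/tmp/scratch.txt") = 0',
    'stat("/etc/hosts", {st_mode=S_IFREG|0644, ...}) = 0',
]
print(touched_paths(sample))
```

The set of touched paths is what a diff layer would then snapshot before/after to show what changed.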
great insights, i'll read up on it and see if it can be useful, thx
np. there's a diagram, "Linux bcc/BPF tracing tools" [-1] in the bcc readme [0] that's also in [1] which explains ebpf and bcc and bpftrace.
filetop, dirtop, and vfsstat use bpf to trace the VFS layer. [4]
[-1] "Linux bcc/BPF tracing tools" https://www.brendangregg.com/BPF/bcc_tracing_tools_early2019...
[0] iovisor/bcc: https://github.com/iovisor/bcc
[1] "Linux Extended BPF (eBPF) Tracing Tools", Dtrace book: https://www.brendangregg.com/ebpf.html
If running an AI agent in a container (with devcontainers and e.g. vscode):
Good container policy prevents granting a container the CAP_SYS_ADMIN capability; the least-privileges thing to do is to grant limited capabilities to the container like CAP_BPF and (CAP_PERFMON, CAP_NET_RAW, CAP_SYS_PTRACE) [3].
[3] https://medium.com/@techdevguides/using-bpftrace-with-limite...
[4] bpfcc-tools manpages: https://manpages.debian.org/unstable/bpfcc-tools/index.html
though ripgrep wins, vscode fails at monitoring large workspaces due to inotify limits too; so some way to parse fs events from bcc and libdtrace with python would be great
prompt 1: Create a python project named idk dbpftrace with a pyproject.toml and a README and sphinx /docs, with bcc and python-dtrace as dependencies to, then in dbpftrace/,
parse pid and descendents' fs syscall events from bcc (ebpf) or python-dtrace (dtrace), depending on which os we're running
Edit:
Prompt 1B: Create a Go package named dbpftrace with a README and docs,
parse pid and descendents' fs syscall events from bpftrace or dtrace stdout, depending on which os we're running
Prompt 1C: Create a Go package named dbpftrace with a README and docs, then create a cli utility named dbpftrace to:
parse pid and descendents' fs syscall events (like bpftrace) using libbpfgo and godtrace
Use either (cilium/ebpf or libbpfgo or gobpf) or (godtrace or (CGO or FFI) bindings to libdtrace) depending on which OS, by default
cilium/ebpf: https://github.com/cilium/ebpf
aquasecurity/libbpfgo https://github.com/aquasecurity/libbpfgo
iovisor/gobpf w/ bcc: https://github.com/iovisor/gobpf
chzyer/godtrace: https://github.com/chzyer/godtrace
oracle/dtrace-utils/tree/devel/libdtrace: https://github.com/oracle/dtrace-utils/tree/devel/libdtrace
From https://news.ycombinator.com/item?id=45755142 re eBPF for WAF:
> awesome-ebpf > Kernel docs, examples, Go libraries: https://github.com/zoidyzoidzoid/awesome-ebpf#go-libraries :
>> Go libraries:
>> cilium/ebpf - Pure-Go library to read, modify and load eBPF programs and attach them to various hooks in the Linux kernel.
>> libbpfgo - eBPF library for Go, powered by libbpf.
>> gobpf - Go bindings for BCC for creating eBPF programs
Thanks for the thoughtful pointers — super helpful.
Where diffwatch is today: it’s a portable directory watcher (fsnotify → inotify/FSEvents/ReadDirectoryChangesW) that coalesces events and renders live unified diffs in a tiny TUI.
What I’m planning based on your suggestions (and others here):
1. Two-tier design
Default (no admin): keep the current directory-watch mode for quick, portable use.
Power mode (attach): diffwatch attach --pid <PID> | --cmd "<…>" to trace a process and its children and feed any touched paths into the same diff UI.
2. Per-OS backends for “attach”
Linux: eBPF/bpftrace when available; fallback to strace -ff -e trace=file for zero extra deps.
macOS: opensnoop / fs_usage (DTrace-based).
Windows: ETW (Kernel File provider) via a tiny helper (e.g., KrabsETW) that streams JSON events.
3. Admin rights caveat
macOS (DTrace) and Windows (ETW kernel) typically require admin. I’ll keep the default dir-watch mode as the “no-admin” path, and document the elevated-rights requirement clearly for “attach”.
4. Normalized event stream
All backends emit a common JSON line: {"ts": "...", "pid": 1234, "op": "create|write|rename|unlink|close", "path": "..."} Then a short stability window (debounce + retry on transient ENOENT) before reading to diff.
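A minimal sketch of that normalized-event contract, assuming the NDJSON shape above (the coalescing window is simplified here to last-write-wins per path; a real version would also debounce by timestamp and retry on transient ENOENT):

```python
import json

def coalesce(ndjson_lines, ops=("create", "write", "rename", "unlink", "close")):
    """Parse NDJSON filesystem events and keep only the latest op per path."""
    latest = {}
    for line in ndjson_lines:
        event = json.loads(line)
        if event["op"] in ops:
            latest[event["path"]] = event
    return latest

events = [
    '{"ts": "t0", "pid": 1234, "op": "create", "path": "/tmp/a"}',
    '{"ts": "t1", "pid": 1234, "op": "write",  "path": "/tmp/a"}',
    '{"ts": "t2", "pid": 1234, "op": "unlink", "path": "/tmp/b"}',
]
print(coalesce(events))
```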
5. Scalability & ergonomics
Handle editor/atomic-save tempfiles gracefully.
Respect .gitignore and add --exclude/--include globs.
Guardrails for watch count limits; skip non-regular files; optional --record (NDJSON) and --save-patch.
6. Containers / agents
Nice follow-on: diffwatch attach --cmd ... inside a container (or attach by PID in the container namespace) to confine the blast radius for agent runs.
Ask: I’d love help and pointers to minimal tracer scripts:
A small bpftrace/DTrace snippet that reliably captures opens/writes/renames for a PID(+children).
A tiny Windows ETW consumer example focused on File I/O, filtered by PID, emitting JSON.
Repo: https://github.com/deemkeen/diffwatch I’ll open issues for:
“Attach mode” backends (Linux/macOS/Windows)
.gitignore/globs
Event coalescing + transient ENOENT handling
JSON recording / patch export
If you or anyone wants to collaborate, I’ll tag them good first issue / help wanted and am happy to review PRs quickly. Thanks again for the nudge to go beyond plain FS events — the PID/container “attach” mode should make agent debugging much more robust.
Np. Distributed tracing tools for containers already do this but none have a --diff feature for logging what changed in changed files.
Does this command also track renames?
sudo dtrace -n 'vfs::*:entry { printf("%-16s %-6d %s", execname, pid, probefunc); }'
Isn't it just a list of syscalls instead of vfs:*?
Actually, re: Dtrace on MacOS with SIP and apparently without sufficient symbols installed to trace kernel syscalls these days: https://news.ycombinator.com/item?id=38909715
It looks like there's a utility called dtruss which wraps Dtrace on OSX: https://www.google.com/search?q=dtruss
"Misadventures in DTrace: how to debug the macOS kernel" (2025) https://jade.fyi/blog/misadventures-in-dtrace/ :
> My advice, and what I actually did, is to put macOS in a UTM.app VM with nothing of value in it, disable SIP in the VM, and do all further testing in there.
> Once inside a VM with SIP disabled (or with dtrace enabled as a fine-grained policy), DTrace works. dtruss gives some output like the following:
FWIU it is possible to trace Linux containers on Mac OS with e.g. cilium, only if the Linux containers are hosted in a Linux VM.
Bye, Google Search
Pagefind: https://pagefind.app/ .. https://github.com/pagefind/pagefind :
> The goal of Pagefind is that websites with tens of thousands of pages should be searchable by someone in their browser, while consuming as little bandwidth as possible. Pagefind’s search index is split into chunks, so that searching in the browser only ever needs to load a small subset of the search index. Pagefind can run a full-text search on a 10,000 page site with a total network payload under 300kB, including the Pagefind library itself.
From https://news.ycombinator.com/item?id=31321024 :
> Sphinx searchindex.js does Porter stemming for English and other languages: [ sphinx.search: https://github.com/sphinx-doc/sphinx/tree/master/sphinx/sear... ]
atsphinx/pagefind adds pagefind search to sphinx docs : https://github.com/atsphinx/pagefind
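The chunked-index idea can be sketched in a few lines (this is not Pagefind's actual on-disk format, just the principle: shard the inverted index by term prefix so a query only needs to load one shard over the network):

```python
from collections import defaultdict

def build_chunks(inverted_index, prefix_len=1):
    """Shard {term: [page_ids]} into chunks keyed by term prefix."""
    chunks = defaultdict(dict)
    for term, pages in inverted_index.items():
        chunks[term[:prefix_len]][term] = pages
    return dict(chunks)

def search(chunks, term, prefix_len=1):
    """Only the chunk for this term's prefix would be fetched by the browser."""
    chunk = chunks.get(term[:prefix_len], {})
    return chunk.get(term, [])

index = {"graphene": [1, 4], "graph": [2], "search": [3]}
chunks = build_chunks(index)
print(search(chunks, "graphene"))   # [1, 4]
```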
Hard Rust requirements from May onward
From the mailing list on this: https://lists.debian.org/debian-devel/2025/10/msg00288.html :
> Be careful. Rust does not support some platforms well.[0] ANything
> that is not Tier 1 is not guaranteed to actually work. And
> architectures like m68k and powerpc are Tier 3.
>
> [0] <https://doc.rust-lang.org/beta/rustc/platform-support.html>.
[ The rustc book > Platform Support: https://doc.rust-lang.org/beta/rustc/platform-support.html ][ The rustc book > Target Tier Policy: https://doc.rust-lang.org/beta/rustc/target-tier-policy.html... ]
Thank you for your message.
Rust is already a hard requirement on all Debian release
architectures and ports except for alpha, hppa, m68k, and
sh4 (which do not provide sqv).
Create a plan to add support for {alpha, hppa, m68k, sh4} targets to the Rust compiler.
2.5pro: "Rust Compiler Target Porting Plan" https://gemini.google.com/share/b36065507d9d :
> [ rustc_codegen_gcc, libcore atomics for each target (m68k does not have support for 64-bit atomics and will need patching to libgcc helper functions), ..., libc, liballoc and libstd (fix std::thread, std::fs, std::net, std::sync), and then compiletest will find thousands of bugs ]
So, CI build hours on those ISAs: emulated at first, then actual hardware?
"Google porting all internal workloads to ARM, with help from GenAI" (2025) https://news.ycombinator.com/item?id=45691519
"AI-Driven Software Porting to RISC-V" (2025) https://news.ycombinator.com/item?id=45315314
"The Unreasonable Effectiveness of Fuzzing for Porting Programs" (2025) https://news.ycombinator.com/item?id=44311241 :
> A simple strategy of having LLMs write fuzz tests and build up a port in topological order seems effective at automating porting from C to Rust.
WebAssembly (WASM) arch support for the Linux kernel
Demos at: https://joelseverin.github.io/linux-wasm/
How does this compare to the c2w container2wasm approach?
container2wasm/container2wasm: https://github.com/container2wasm/container2wasm :
> container2wasm is a container-to-wasm image converter that enables to run the container on WASM.
> Converts a container to WASM with emulation by Bochs (for x86_64 containers), TinyEMU (for riscv64 containers) and QEMU.
> Runs on WASI runtimes (e.g. wasmtime, wamr, wasmer, wasmedge, wazero)
> Runs on browser
> x86_64, riscv64 or AArch64 containers are recommended.
/? container2wasm: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
ktock/vscode-container-wasm https://github.com/ktock/vscode-container-wasm :
> Containers on VSCode for the Web [ https://vscode.dev ]
ktock/vscode-container-wasm-gcc-example: https://github.com/ktock/vscode-container-wasm-gcc-example
JupyterLite works without install on Chromebooks.
JupyterLite still lacks a Terminal e.g. with BusyBox Ash in WASM, with a file system integrated with the Jupyter-xeus kernel file system.
This appears to load much more quickly than other Linux and I think even just bash in WASM demos I've seen.
That requires an ISA emulation layer; this new implementation doesn't - here, every binary is compiled as wasm, every child process runs as a new Wasm WebWorker, and the kernel ABI is exposed as Wasm export functions.
Removing the ISA translation layer has the potential to be massively faster for full-system environments. At the expense of maybe some new bugs.
The performance should ultimately be similar to compiling your userspace application directly as Wasm, but you now get to take advantage of the full kernel ABI instead of just the minimal shims that Emscripten gives you / whatever DOM glue you create yourself.
One less layer of translation!
Shouldn't browser tabs and/or origins get their own SELinux contexts like all Android apps since Android 4.4, like container-selinux and openshift's k8s? https://news.ycombinator.com/item?id=45418918#45421242
uutils/coreutils, findutils, and diffutils are written in Rust, which IIRC compiles to WASM more cleanly; Toybox (C) is another small userland: https://news.ycombinator.com/item?id=45495100
RustPython may or may not have a faster loading time than CPython compiled to WASM, though there are already some patches to CPython for WASM.
Where are the tests for the post-patch bugs this finds? Are there expected behaviors that are not yet specified in tests?
GHC now runs in the browser
Interesting technical achievement but what would this be used for in practical terms?
Have you ever used Godbolt? The Rust playground? The Typescript's playground? The Go playground?
It lets you have that without the pain of hosting compilers server side.
From "WebR – R in the Browser" (2025) https://news.ycombinator.com/item?id=44999706 :
> jupyterlite-xeus builds jupyterlite, Jupyter xeus kernels, and the specified dependencies to WASM with packages from conda-forge or emscripten-forge.
jupyterlite/xeus https://github.com/jupyterlite/xeus
There may be an easy way to wrap GHC with jupyterlite/xeus, with Haskell's lazy evaluation; xeus-haskell or xeus-ghc?
Linux Kernel Ported to WebAssembly – Demo Lets You Run It in Your Web Browser
- "WebAssembly (WASM) arch support for the Linux kernel" (joelseverin/linux-wasm) https://news.ycombinator.com/item?id=45783074#45784329 2hrs ago:
> How does this compare to the c2w container2wasm approach?
Agentic AI Home Energy Management System: Residential Load Scheduling
Is there yet a multi-source heat pump that knows the costs of each source?
That doesn't require an LLM.
Few (if any?) residential energy markets in the United States have intraday pricing. The EU requires intraday electricity pricing for membership FWIU.
Also it's not uncommon for the price of electricity to go below zero in markets with heavy subsidization to accelerate progress toward clean energy.
From "German power prices turn negative amid expansion in renewables" (2025) https://news.ycombinator.com/item?id=42603130 :
> Given the intraday prices, are there sufficient incentives to stimulate creation of energy storage businesses to sell the excess electricity back a couple hours or days later?
Scientists create squishy robotic 'eye' that focuses automatically
ScholarlyArticle: "Bioinspired photoresponsive soft robotic lens" (2025) https://www.science.org/doi/10.1126/scirobotics.adw8905
NewsArticle: "Light-Powered Soft Lens from Georgia Tech Researchers Brings Human-Like Vision to Robotics" (2025) https://bme.gatech.edu/news/light-powered-soft-lens-georgia-... :
> The photoresponsive hydrogel soft lens (PHySL) is constructed entirely from soft, bio-safe materials, making it ideal for applications where rigid optics are impractical—such as soft robots and medical devices that interact safely with tissues. At the core of the design is a thermally responsive hydrogel—a water-absorbing polymer commonly found in products like contact lenses—infused with graphene, which converts light into heat, triggering shape changes that act as artificial muscles. This property allows the lens to be controlled remotely without needing battery power or wired connections.
Notes re: ocular regenerative medicine: https://news.ycombinator.com/item?id=43209684 :
> Accommodating IOLs that resist UV damage better than natural tissue: Ocumetics
Ocumetics is developing an accommodating IOL: https://ocumetics.com/#bionic :
> Ocumetics Lens technologies have been designed to create an accommodating intraocular lens, which fits within the lens capsule and eliminates the need for corrective lenses, using the natural kinetics in the eye ciliary muscles and zonules. The proprietary design has been created to be used as a replaceable device that serves secondarily as a docking station for customized optics and evolving nanotechnologies.
> Its basic framework consists of two components that are designed to engage and interact within the confines of the eye’s natural lens space to establish a dynamic and particularly sensitive connection between eye muscle action and curvature change. This connection can be adapted for virtually any eye, regardless of the lens prescription.
Blazeio vs. FastAPI vs. Robyn: Benchmarking Reveals 86x Performance Difference
Blazeio is an ultra-fast asynchronous real-time streaming web framework crafted for high-performance backend applications. Built on Python's asyncio, it delivers non-blocking operations, minimal overhead, and lightning-quick request handling.
The Benchmark Setup
Hardware & Testing Environment
· Platform: Google Colab v5e-1 TPU Instance
· Testing Tool: wrk with 1 thread, 10-second duration
· Connection Loads: 1,000, 3,000, 5,000, and 10,000 concurrent connections
· Payload: Identical "Hello world" response with full security headers (HSTS, CSP, X-Frame-Options, etc.)
Framework Configurations
All three frameworks were tested with identical:
· Security headers and policies
· Keep-alive connections enabled
· Same response payload
· Identical testing methodology
The Results: Complete Performance Annihilation
Throughput Massacre Across All Load Levels
Requests Per Second Comparison:
Connections  Blazeio     Robyn      FastAPI    Blazeio Advantage
1,000        79,388 RPS  8,685 RPS  4,151 RPS  19.1x vs FastAPI
3,000        57,519 RPS  8,014 RPS  4,387 RPS  13.1x vs FastAPI
5,000        44,782 RPS  7,519 RPS  3,411 RPS  13.1x vs FastAPI
10,000       39,157 RPS  7,186 RPS  3,086 RPS  12.7x vs FastAPI
Transfer Rate Comparison:
Connections  Blazeio     Robyn      FastAPI    Blazeio Advantage
1,000        50.88 MB/s  1.05 MB/s  0.59 MB/s  86.2x vs FastAPI
3,000        36.86 MB/s  0.97 MB/s  0.62 MB/s  59.5x vs FastAPI
5,000        28.70 MB/s  0.91 MB/s  0.48 MB/s  59.8x vs FastAPI
10,000       25.09 MB/s  0.87 MB/s  0.43 MB/s  58.4x vs FastAPI
Blazeio's worst-case scenario outperforms everyone else's best-case scenario:
· Blazeio at 10,000 connections (39,157 RPS) vs FastAPI at 1,000 connections (4,151 RPS): 9.4x faster
· Blazeio at 10,000 connections vs Robyn at 1,000 connections: 4.5x faster
Latency Scaling: The Architectural Divide
Average Latency Under Load:
Connections  Blazeio  Robyn     FastAPI
1,000        29.72ms  113.66ms  135.89ms
3,000        33.54ms  362.94ms  345.99ms
5,000        57.07ms  631.26ms  879.33ms
10,000       93.83ms  1.23s     1.41s
Blazeio's latency increased only 3.2x from 1K to 10K connections, while others increased 10-15x!
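The headline multipliers can be recomputed from the figures above as a quick sanity check (values copied from the 1,000- and 10,000-connection rows):

```python
# Figures taken from the benchmark tables above.
rps = {"blazeio": 79388, "robyn": 8685, "fastapi": 4151}       # RPS at 1,000 connections
transfer = {"blazeio": 50.88, "fastapi": 0.59}                 # MB/s at 1,000 connections
latency_blazeio = {"1k": 29.72, "10k": 93.83}                  # average latency, ms

print(round(rps["blazeio"] / rps["fastapi"], 1))               # RPS advantage vs FastAPI
print(round(transfer["blazeio"] / transfer["fastapi"], 1))     # transfer-rate advantage
print(round(latency_blazeio["10k"] / latency_blazeio["1k"], 1))  # latency growth 1K -> 10K
```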
Total Request Capacity
Requests Served in 10 Seconds:
Connections  Blazeio  Robyn   FastAPI
1,000        797,765  87,438  41,769
3,000        575,932  80,393  43,896
5,000        449,687  75,821  34,362
10,000       393,401  72,300  31,171
Blazeio served more requests at 10,000 connections than FastAPI served at 1,000 connections.
The Architecture Behind the Numbers
Why Blazeio Achieves Revolutionary Performance
1. Zero-Copy Architecture: Data moves directly from kernel buffers to network without Python-level copying
2. Connection-Level Coroutines: One coroutine handles all requests on a connection, eliminating per-request overhead
3. Protocol-Level Backpressure: Natural flow control prevents buffer bloat and memory exhaustion
4. Minimal Abstraction: Raw socket access with clean abstractions, not framework magic
The 86x transfer rate advantage and consistent sub-100ms latency at 10,000 concurrent connections demonstrate that traditional web framework architectures have been leaving massive performance on the table.
All tests were conducted on identical hardware with identical payloads and security configurations.
Has anyone else achieved similar performance with different architectural approaches? What's your experience scaling Python web applications to 10,000+ concurrent connections?
TIL about blazeio.
blazeio: https://github.com/anonyxbiz/Blazeio
TechEmpower Framework Benchmarks > Round 23 (2025-02) > Data Updates: https://www.techempower.com/benchmarks/#section=data-r23&tes...
TechEmpower/FrameworkBenchmarks > wiki > Project-Information-Framework-Tests-Overview: https://github.com/TechEmpower/FrameworkBenchmarks/wiki/Proj...
TechEmpower/FrameworkBenchmarks > Python: https://github.com/TechEmpower/FrameworkBenchmarks/tree/mast...
Man with Brain Implant Controls Another Person's Hand–and Feels What She Feels
ScholarlyArticle: "Cortically Interfaced Human Avatar Enables Remote Volitional Grasp and Shared Discriminative Touch" (2025) https://www.medrxiv.org/content/10.1101/2025.09.21.25336267v...
Making EVs takes big energy, but after 2 years, they're cleaner than gas cars
How much less energy would it take to make (electric) vehicles out of carbon?
There now exist all carbon PM-free motors, all carbon and copper free cables, and there are organic and inorganic carbon-based substitutes for metal and plastic.
Are bioplastics less energy intensive in the automotive industry too?
Man Builds Two Billion FPS Camera That Records at the Speed of Light
"Visualizing video at the speed of light — one trillion frames per second" (2011; MIT) https://youtube.com/watch?v=EtsXgODHMWk&
"Single-shot real-time femtosecond imaging of temporal focusing" (2018) https://www.nature.com/articles/s41377-018-0044-7
"Filming the Speed of Light at 10 Trillion FPS" (2019; The Slow Mo Guys) https://youtube.com/watch?v=7Ys_yKGNFRQ&
OT video: "A laser pointer at 2 billion fps makes the speed of light look... kinda weird" (2025; AlphaPhoenix) https://youtube.com/watch?v=o4TdHrMi6do&
Ask HN: How much are you spending on your GPU in terms of energy?
I view the optimisation of GPU energy-consumption as an important state of the art problem.
I think it's really interesting to look at how the GPU market is evolving. TensorPool [1], as an example, who I'm not affiliated with, is a startup that is looking at lowering GPU inference costs.
I think there was some research in relation to energy consumption a couple of years back [2], but I've not noticed anything more recently, since, having briefly searched.
I'm really interested to hear the thoughts of the community in terms of energy costs and provisioning spend w.r.t. increasing usage over time.
[1] https://tensorpool.dev/ [2] GPT-4 energy consumption: https://www.sciencedirect.com/science/article/pii/S2542435123003653
A TPU is supposed to do more tensor ops per watt-hour (TOPS/Wh) than a GPU.
Though, some GPUs have TPU-like units; for example, Nvidia DLSS 3 runs on the GPU's Tensor Cores.
"A PCIe Coral TPU Finally Works on Raspberry Pi 5" (2023) https://news.ycombinator.com/item?id=38310063
"ARM adds neural accelerators to GPUs" (2025) https://news.ycombinator.com/item?id=44919793
From "The von Neumann bottleneck is impeding AI computing?" (2025) https://news.ycombinator.com/item?id=45398473 :
> How does Cerebras WSE-3 with 44GB of 'L2' on-chip SRAM compare to Google's TPUs, Tesla's TPUs, NorthPole, Groq LPU, Tenstorrent's, and AMD's NPU designs?
Tensor Processing Unit: https://en.wikipedia.org/wiki/Tensor_Processing_Unit
..
- "Ask HN: Are you paying electricity bills for your service?" (2024) https://news.ycombinator.com/item?id=42454547 re: Zero Water datacenters
- "Show HN: LangSpend – Track LLM costs by feature and customer (OpenAI/Anthropic)" (2025-10) https://news.ycombinator.com/item?id=45771618
Show HN: rstructor, Pydantic+instructor for Rust
There are pydantic schemas for all of the Schema.org Linked Data RDFS vocabulary; that could also work in Rust: https://github.com/lexiq-legal/pydantic_schemaorg
A PR to add support for pydantic v2: https://github.com/lexiq-legal/pydantic_schemaorg/pull/14/fi...
sure. could definitely add that if someone needs it. thx for sharing!
Introducing architecture variants
"Gentoo x86-64-v3 binary packages available" (2024) https://news.ycombinator.com/item?id=39255458
"Changes/Optimized Binaries for the AMD64 Architecture v2" (2025) https://fedoraproject.org/wiki/Changes/Optimized_Binaries_fo... :
> Note that other distributions use higher microarchitecture levels. For example RHEL 9 uses x86-64-v2 as the baseline, RHEL 10 uses x86-64-v3, and other distros provide optimized variants (OpenSUSE, Arch Linux, Ubuntu).
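The microarchitecture levels are cumulative feature sets; a rough sketch of detecting the highest supported level from CPU flags (the flag lists here are simplified approximations, not the full psABI definitions, which also include earlier SSE/LAHF requirements):

```python
# Approximate extra flags required per x86-64 level (simplified;
# the authoritative definitions are in the x86-64 psABI).
LEVELS = {
    "x86-64-v2": {"cx16", "popcnt", "sse4_1", "sse4_2", "ssse3"},
    "x86-64-v3": {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"},
    "x86-64-v4": {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"},
}

def highest_level(cpu_flags):
    """Return the highest level whose cumulative flag set is satisfied."""
    supported = "x86-64-v1"
    required = set()
    for level in ("x86-64-v2", "x86-64-v3", "x86-64-v4"):
        required |= LEVELS[level]
        if required <= cpu_flags:
            supported = level
    return supported

# A Haswell-like flag set (on Linux, flags come from /proc/cpuinfo).
haswell_like = {"cx16", "popcnt", "sse4_1", "sse4_2", "ssse3",
                "avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"}
print(highest_level(haswell_like))   # x86-64-v3
```

On glibc 2.33+ systems, `ld.so --help` also reports which levels the running CPU supports.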
Immutable releases are now generally available on GitHub
My instant reaction was: "Wait?! They weren't immutable before?"
I'm glad they're doing this, and it's an unpleasant surprise that they didn't already work this way. I don't understand why they allow mutable releases.
Git tags aren’t even really immutable; they’re often treated as such, but they can be deleted and recreated.
GitHub docs > Signing tags: https://docs.github.com/en/authentication/managing-commit-si... :
> You can sign tags locally using GPG, SSH, or S/MIME
$ git tag -s MYTAG -m "Signed tag"
# Creates a signed tag
$ git tag -v MYTAG
# Verifies the signed tag
Git book > 7.4 Git Tools - Signing Your Work: https://git-scm.com/book/ms/v2/Git-Tools-Signing-Your-Work :
$ git commit -S -m 'Signed commit'
But you can still delete and recreate/sign the same tag again.
Sigstore.dev supports revocation:
"Don’t Panic: A Playbook for Handling Account Compromise with Sigstore" (2022) https://blog.sigstore.dev/dont-panic-a-playbook-for-handling...
"Why you can’t use Sigstore without Sigstore" (2023) https://blog.sigstore.dev/why-you-cant-use-sigstore-without-... :
> Revocation in Sigstore. A recent post on this blog notes that signatures alone don’t tell you whether to trust an artifact; for that, you need a verification policy. This verification policy is a much more natural place to handle revocation than the identity layer; see Don’t Panic for an example. This allows us to avoid the scalability problems of global revocation lists (see CRLite for a discussion of these issues). The mantra here is revoke artifacts, not keys.
Artifact Attestation > Verifying an artifact attestation for binaries: https://docs.github.com/en/actions/how-tos/secure-your-work/... :
gh attestation verify PATH/TO/YOUR/BUILD/ARTIFACT-BINARY -R orgname/reponame
If it is not possible to retract/revoke releases then, there again, the installer MUST verify against a signed list of revoked releases.
Aligned Carbon Nanotube Arrays Revolutionize Terahertz Transistors
ScholarlyArticle:
"Terahertz metal–oxide–semiconductor transistors based on aligned carbon nanotube arrays" (2025) https://www.nature.com/articles/s41928-025-01463-6
Quantum Nanodiamonds from 1 Step, Industrial-Scale Pressure and Temp Process
ScholarlyArticle: "Quantum-Grade Nanodiamonds from a Single-Step, Industrial-Scale Pressure and Temperature Process" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.20...
NewsArticle: "A faster, more affordable way to produce quantum nanodiamonds holds promise for medicine and industry" (2025) https://phys.org/news/2025-10-faster-quantum-nanodiamonds-me...
Rapid, low-temp nanodiamond formation by electron-beaming adamantane C–H bonds
ScholarlyArticle: "Rapid, low-temperature nanodiamond formation by electron-beam activation of adamantane C–H bonds" (2025) https://www.science.org/doi/10.1126/science.adw2025
NewsArticle: "Scientists just found a way to grow diamonds without heat or pressure" (2025) https://www.sciencedaily.com/releases/2025/10/251029002917.h...
Show HN: Run a GitHub Actions step in a gVisor sandbox
> Surprisingly enough, GitHub Actions with read-only permissions still receive a cache write token, allowing cache poisoning, so they are not safe to run untrusted code.
What are solutions to this and their tradeoffs?
1. Disallow cache write access to read-only actions
2. Stack caches such that read only action cache writes don't affect the cache for read-write actions
edit: What else would solve?
Show HN: LangSpend – Track LLM costs by feature and customer (OpenAI/Anthropic)
We're two developers who got hit twice by LLM cost problems and built LangSpend to fix it.
First: We couldn't figure out which features in our SaaS were expensive to run or which customers were costing us the most. Made it impossible to price properly or spot runaway costs.
Second: We burned 80% of our $1,000 AWS credits on Claude 4 (AWS Bedrock) in just 2 months while building prototypes of our idea, but we had zero visibility into which experiments were eating the budget.
So we built LangSpend — a simple SDK that wraps your LLM calls and tracks costs per customer and per feature.
How it works:
- Wrap your LLM calls and tag them with customer/feature metadata
- Dashboard shows you who's costing what in real-time
- Currently supports Node.js and Python SDKs
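The SDK's exact API isn't shown in the post; as a generic, hypothetical sketch of the wrap-and-tag pattern (the model name and per-million-token prices are invented):

```python
from collections import defaultdict

# Hypothetical per-million-token prices in USD; real prices vary by provider.
PRICES = {"claude": {"input": 3.00, "output": 15.00}}

costs = defaultdict(float)  # (customer, feature) -> USD

def track(customer, feature, model, input_tokens, output_tokens):
    """Attribute the cost of one LLM call to a (customer, feature) pair."""
    p = PRICES[model]
    usd = (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
    costs[(customer, feature)] += usd
    return usd

track("acme", "summarize", "claude", 12_000, 800)
print(round(costs[("acme", "summarize")], 6))  # 0.048
```

The point is only that per-call attribution makes per-customer and per-feature rollups trivial.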
Still early days but solving our problem. Try it out and let me know if it helps you too.
- https://langspend.com - Docs: https://langspend.com/docs - Discord: https://discord.gg/Kh9RJ5td
Additional useful metrics:
TOPS/Whr: Tensor ops per watt-hour
Tokens/Whr: LLM ingress|egress tokens per watt-hour
% green energy. If 100% offset by PPAs is 100% green, is 100% directly-sourced clean energy 100% or "200% green"?
CO2 cost
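A back-of-envelope sketch of the Tokens/Whr and CO2-cost metrics above (all input numbers are invented for illustration):

```python
def tokens_per_wh(tokens: int, energy_wh: float) -> float:
    """Tokens/Whr: LLM tokens served per watt-hour consumed."""
    return tokens / energy_wh

def co2_grams(energy_wh: float, grid_g_per_kwh: float) -> float:
    """CO2 cost: energy (Wh) times grid carbon intensity (gCO2e/kWh)."""
    return (energy_wh / 1000.0) * grid_g_per_kwh

# Hypothetical: 1M tokens served for 500 Wh on a 400 gCO2e/kWh grid
print(tokens_per_wh(1_000_000, 500))  # 2000.0 tokens/Whr
print(co2_grams(500, 400))            # 200.0 gCO2e
```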
/carbon.txt: https://www.thegreenwebfoundation.org/tools/carbon-txt/ :
> carbon.txt is a single, discoverable location on any domain – /carbon.txt – for public, machine‑readable sustainability data.
thegreenwebfoundation/co2.js: https://github.com/thegreenwebfoundation/co2.js .. https://www.thegreenwebfoundation.org/co2-js/
Firefox Devtools Profiler uses CO2.js to estimate carbon cost: https://www.thegreenwebfoundation.org/news/carbon-emissions-...
TCS: Tech Carbon Standard > impact categories > upstream > Foundation Models: https://www.techcarbonstandard.org/impact-categories/upstrea... :
> In addition to the carbon footprint of AI data centres, it is essential to mention their extensive water footprint, therefore a careful examination of data centre WUE_source is indispensable.
TCS Glossary: https://www.techcarbonstandard.org/resources/glossary#water-... :
> WUE_source: Water Usage Effectiveness Source:
> A metric used to measure how efficiently data centres use water for cooling and operations. WUE is quantified in cubic meters per megawatt hour of energy (m3/MWh), representing the amount of water consumed per unit of IT equipment output or computing work. To better understand the true water cost of data centres, source (offsite) and site-based (onsite) WUE metrics must be accounted for. The Green Grid distinguishes them as WUE and (WUE_source).
"WATER USAGE EFFECTIVENESS (WUE): A GREEN GRID DATA CENTER SUSTAINABILITY METRIC" The-Green-Grid-White-Paper-35-WUE-Usage-Guidelines.pdf https://airatwork.com/wp-content/uploads/The-Green-Grid-Whit... :
> WUE_source = ( Annual Source Energy Water Usage + Annual Site Water Usage ) / IT Equipment Energy *
> WUE = ( Annual Site Water Usage ) / IT Equipment Energy
[...]
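A direct transcription of the two Green Grid formulas quoted above, with water in m3 and IT equipment energy in MWh (the example numbers are invented):

```python
def wue(site_water_m3: float, it_energy_mwh: float) -> float:
    """WUE = Annual Site Water Usage / IT Equipment Energy (m3/MWh)."""
    return site_water_m3 / it_energy_mwh

def wue_source(source_energy_water_m3: float, site_water_m3: float,
               it_energy_mwh: float) -> float:
    """WUE_source also counts the (offsite) water embodied in source energy."""
    return (source_energy_water_m3 + site_water_m3) / it_energy_mwh

print(wue(1800, 1000))               # 1.8 m3/MWh (site only)
print(wue_source(2500, 1800, 1000))  # 4.3 m3/MWh (site + source)
```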
Electricitymap has average carbon costs by region, but not yet water costs IIRC
"Ask HN: Are you paying electricity bills for your service?" (2024) https://news.ycombinator.com/item?id=42454547 re: Zero Water datacenters
From https://news.ycombinator.com/item?id=45363593 (2025) : microfluidics, Graphene based CPU coolers, graphene thermal pads,
What about model routing? Could split testing or a multi-armed bandit identify where cost can be reduced for an acceptable loss in accuracy?
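One way to frame that: treat each candidate model as a bandit arm whose reward is accuracy minus a cost penalty, then let epsilon-greedy allocate traffic. A minimal sketch; the model names, costs, accuracies, and cost weight are all invented:

```python
import random

random.seed(0)

# Hypothetical arms: model -> (cost per call in USD, true accuracy)
MODELS = {"large": (0.010, 0.95), "small": (0.001, 0.90)}
COST_WEIGHT = 20.0  # penalty per USD; encodes acceptable accuracy loss

stats = {m: [0, 0.0] for m in MODELS}  # model -> [pulls, total reward]

def choose(eps: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best cost-adjusted model."""
    if random.random() < eps or any(s[0] == 0 for s in stats.values()):
        return random.choice(list(MODELS))
    return max(stats, key=lambda m: stats[m][1] / stats[m][0])

for _ in range(2000):
    m = choose()
    cost, acc = MODELS[m]
    reward = (1.0 if random.random() < acc else 0.0) - COST_WEIGHT * cost
    stats[m][0] += 1
    stats[m][1] += reward

best = max(stats, key=lambda m: stats[m][1] / stats[m][0])
print(best, stats[best][0])
```

With these numbers the cheap model's expected cost-adjusted reward (0.90 - 0.02) beats the large model's (0.95 - 0.20), so the router should shift traffic to it.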
Do you already log inputs and outputs?
From https://news.ycombinator.com/item?id=45267271 :
> API facades like OpenLLM and model routers like OpenRouter have standard interfaces for many or most LLM inputs and outputs. Tools like Promptfoo, ChainForge, and LocalAI also all have abstractions over many models.
> What are the open standards for representing LLM inputs, and outputs?
> W3C PROV has prov:Entity, prov:Activity, and prov:Agent for modeling AI provenance: who or what did what when.
Why We're Beating Modsecurity
How does RhinoWAF compare to other open WAFs like OWASP Coraza WAF, bunkerweb, and SafeLine?
Does RhinoWAF support ModSecurity SecLang rulesets like OWASP CRS? Is there a SecLang to RhinoWAF JSON converter?
Shouldn't eBPF be fast at sorting and running rules?
What are good metrics for evaluating WAFs?
coraza: https://github.com/corazawaf/coraza
bunkerweb: https://github.com/bunkerity/bunkerweb
SafeLine: https://github.com/chaitin/SafeLine
RhinoWAF: https://github.com/1rhino2/RhinoWAF
gh topic: waf: https://github.com/topics/waf
awesome-WAF: https://github.com/0xInfection/Awesome-WAF
> What are good metrics for evaluating WAFs?
TPR: True Positive Rate (Detection Rate), TNR: True Negative Rate, FPR: False Positive Rate ("ROC Curve")
Accuracy = (TP + TN) / # Requests
Latency / Detection Time as percentiles
Throughput: requests per second sustained, and response time in ms at that load
Time to Virtual Patch, and CI/CD rule deployment integration
DDoS Response Time: How quickly does the WAF mitigate a Layer 7 (application) DDoS attack?
... Rule Management Overhead: MTTT: Mean Time To Tune, Policy Complexity; CI/CD, SIEM/SOAR integration; https://gemini.google.com/share/0d2d1c53bfb0
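The rate metrics above, computed from confusion-matrix counts (the counts here are invented):

```python
def waf_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Detection-rate metrics from WAF confusion-matrix counts."""
    return {
        "TPR": tp / (tp + fn),  # detection rate
        "TNR": tn / (tn + fp),
        "FPR": fp / (fp + tn),  # false alarms on benign traffic
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Invented counts: 100 attack requests, 900 benign requests
m = waf_metrics(tp=90, fn=10, tn=880, fp=20)
print(m["TPR"])       # 0.9
print(m["accuracy"])  # 0.97
```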
Is there a good way to go from an OpenAPI / Swagger schema to WAF rules; and then to verify that the rules don't collide? IIUC eBPF does part of this
Re: eBPF WAF
awesome-ebpf > Kernel docs, examples, "eBPF/XDP hardware offload to SmartNICs", Go libraries: https://github.com/zoidyzoidzoid/awesome-ebpf#go-libraries
/? ebpf waf site:github.com https://www.google.com/search?q=+ebpf+waf+site%3Agithub.com
harporoeder/ebpfsnitch: "Linux Application Level Firewall based on eBPF and NFQUEUE" https://github.com/harporoeder/ebpfsnitch
ebpf-security/ebpf-https: "eBPF-https is an open source web application firewall (WAF)" https://github.com/ebpf-security/ebpf-https
cilium/cilium: https://github.com/cilium/cilium :
> Cilium is a networking, observability, and security solution with an eBPF-based dataplane. It provides a simple flat Layer 3 network with the ability to span multiple clusters in either a native routing or overlay mode. It is L7-protocol aware and can enforce network policies on L3-L7 using an identity based security model that is decoupled from network addressing.
Hosting SQLite Databases on GitHub Pages (2021)
I wonder if the author would use DuckDB WASM now?
From "Show HN: TeaTime – distributed book library powered by SQLite, IPFS and GitHub" https://news.ycombinator.com/item?id=42264274 :
>> phiresky/sql.js-httpvfs: https://github.com/phiresky/sql.js-httpvfs
>> mmomtchev/sqlite-wasm-http: https://github.com/mmomtchev/sqlite-wasm-http
>> This project is inspired from @phiresky/sql.js-httpvfs but uses the new official SQLite WASM distribution
duckdb/duckdb-wasm: https://github.com/duckdb/duckdb-wasm
"PSA: SQLite WAL checksums fail silently and may lose data" so that's probably not how to sync sqlite; https://news.ycombinator.com/item?id=44672902
electric-sql/electric: https://github.com/electric-sql/electric :
> Specifically, Electric is a read-path sync engine for Postgres. It syncs data out of Postgres into ... anything you like. The core sync protocol is based on a low-level HTTP API. This integrates with CDNs for highly-scalable data delivery.
electric-sql/pglite: https://github.com/electric-sql/pglite :
> Embeddable Postgres with real-time, reactive bindings.
"Using Postgres for Everything" https://news.ycombinator.com/item?id=40420474
The New Home for Blockly
EduBlocks by Anaconda is a block-based Python coding environment also built on google/blockly.
EduBlocks docs > Raspberry Pi Setup: https://docs.edublocks.org/docs/raspberry-pi-setup
edublocks-link: https://github.com/edublocks/edublocks-link :
> Helper application to enable code from EduBlocks to run on Raspberry Pi's
Robots – Comprehensive catalog of 115 humanoid and quadruped robots
mujoco_menagerie includes Mujoco physics simulator MJCF models for 54 robot models; 9 humanoids, 8 quadrupeds: https://github.com/google-deepmind/mujoco_menagerie#menageri...
mujoco_menagerie can also load any of the 135+ models supported by robot_descriptions FWIU
pypi:robot_descriptions: https://github.com/robot-descriptions/robot_descriptions.py
robot-descriptions/awesome-robot-descriptions: https://github.com/robot-descriptions/awesome-robot-descript... :
> A curated list of awesome robot descriptions in URDF, Xacro or MJCF formats.
IBM says quantum computing error correction algorithm can run on AMD chips
"IBM lays out clear path to fault-tolerant quantum computing". IBM Quantum Computing Blog. (2025-06). ; 2025 IBM Quantum Roadmap https://www.ibm.com/quantum/blog/large-scale-ftqc
- ( [2506.03094] Tour de gross: A modular quantum computer based on bivariate bicycle codes https://arxiv.org/abs/2506.03094 )
- [2506.01779] "Improved belief propagation is sufficient for real-time decoding of quantum memory" (2025) https://arxiv.org/abs/2506.01779 :
> Abstract: [...] Relay-BP is inherently parallel, enabling rapid low-footprint decoding with FPGA or ASIC real-time implementations, similar to standard BP. A core aspect of our decoder is its enhancement of the standard BP algorithm by incorporating disordered memory strengths. This dampens oscillations and breaks symmetries that trap traditional BP algorithms.
- "QUEKUF: An FPGA Union Find Decoder for Quantum Error Correction on the Toric Code". ACM Transactions on Reconfigurable Technology and Systems (2025) https://dl.acm.org/doi/10.1145/3733239
necst/QUEKUF: "Union Find Decoder for Quantum Error Correction on the Toric Code": https://github.com/necst/QUEKUF
China's analogue AI chip could work 1k times faster than Nvidia GPU: study
ScholarlyArticle: "Precise and scalable analogue matrix equation solving using resistive random-access memory chips" (2025) https://www.nature.com/articles/s41928-025-01477-0
ReRAM, RRAM: Resistive random-access memory: https://en.wikipedia.org/wiki/Resistive_random-access_memory :
> ReRAM bears some similarities to conductive-bridging RAM (CBRAM) and phase-change memory (PCM) in that they change dielectric material properties. CBRAM involves one electrode providing ions that dissolve readily in an electrolyte material, while PCM involves generating sufficient Joule heating to effect amorphous-to-crystalline or crystalline-to-amorphous phase changes. By contrast, ReRAM involves generating defects in a thin oxide layer, known as oxygen vacancies (oxide bond locations where the oxygen has been removed), which can subsequently charge and drift under an electric field. The motion of oxygen ions and vacancies in the oxide would be analogous to the motion of electrons and holes in a semiconductor.
Physicists Find Hidden Quantum Mirrors That Trap Light in 2D Materials
ScholarlyArticle: "Cavity electrodynamics of van der Waals heterostructures" (2024) https://arxiv.org/abs/2403.19745
; graphite / graphene optical cavity
From https://news.ycombinator.com/item?id=45282604 re: spiking neuromorphics:
> What are the ways to get spiking behavior out of integrated nanophotonics?
> Saturable Absorption (excitable semiconductor lasers, graphene laser cavity ,), NDR Negative Differential Resistance (RTD Resonant Tunneling Diodes,), PCM: Phase-change materials (DVD-RW,),
> Metamaterials and metasurfaces are probably useful for extreme nonlinear spiking neuromorphic computing with integrated nanophotonics.
Some optical metamaterials have picosecond phase change latency
Stress-testing model specs reveals character differences among language models
ScholarlyArticle: "Stress-Testing Model Specs Reveals Character Differences among Language Models" (2025) https://arxiv.org/abs/2510.07686
Why Dictators Are the Best Devs: Commands, Not Suggestions
Oh, but the dictator self-saboteurially railroads their ignorant bias without consideration and thereby wastes resources that could've been saved by asking questions and listening to experts.
An imperative and commanding tone works well [with LLMs], only when you actually know what you're doing.
> Oh, but the dictator self-saboteurially railroads their ignorant bias without consideration and thereby wastes resources that could've been saved by asking questions and listening to experts.
Yeah, but then he just jumps ship to safer waters while the rest of you drown.
Classical theories of gravity produce entanglement
"Classical theories of gravity produce entanglement" (2025) https://www.nature.com/articles/s41586-025-09595-7 :
> Abstract: [...] Here we extend the description of matter used in these theorems to the full framework of quantum field theory, finding that theories with classical gravity can then transmit quantum information and, thus, generate entanglement through physical, local processes.
NewsArticle: "Unifying physics just got harder: Study challenges fundamental test of quantum gravity" (2025) https://interestingengineering.com/science/unifying-physics-... :
> The concept at the center of this debate dates back to a 1957 proposal by Nobel laureate Richard Feynman, who suggested that if gravity could cause two massive objects to become quantumly entangled, then gravity itself must be quantum in nature. The idea has recently gained traction as advances in precision measurement make such tests experimentally feasible.
NewsArticle: "Does gravity produce quantum weirdness? Proposal divides physicists" (2025) https://www.nature.com/articles/d41586-025-03381-1
On gravity;
Gravity from QFT:
> This says that the standard model actually does describe the n-body orbits of the planets:
> "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
> There's also this:
Fluid models also predict gravity:
> "Fluid vacuum yields exact solutions to Pioneer anomaly and Mercury's perihelion (2019)" https://news.ycombinator.com/item?id=45220585
Diamond Thermal Conductivity: A New Era in Chip Cooling
"Diamond Blankets Will Keep Future Chips Cool" (2025) https://spectrum.ieee.org/diamond-thermal-conductivity
Are diamond blankets necessary for cooling graphene semiconductors, which are much less thermally wasteful?
Scientists Discover New Path to Room-Temperature Superconductors
> Until now, the BCS theory based on the formation of Cooper pairs and DFT predictions based on quantum mechanics have remained separate. Liu’s team found a way to connect them.
ScholarlyArticle: "Revealing symmetry-broken superconducting configurations by density functional theory" (2025) https://iopscience.iop.org/article/10.1088/1361-6668/adedbc
Fedora Plans to Block Unsigned RPM Packages by Default
That's a good step in the right direction. Curious if they will ever remove the PGP keys from the public mirrors. That would be my next step. RPMs can be re-signed with any keys, and those keys can be replaced in a mirror. It would eventually be caught, but I prefer not to have people accustomed to these dark patterns in the first place. Keys used to validate RPMs need to be served from locked-down servers, and companies with internal mirrors must find a secure way to cache and serve them internally.
From https://man7.org/linux/man-pages/man5/yum.conf.5.html :
[ $ man yum.conf | grep -C 5 -i gpg ]
gpgkey list of strings
URLs of a GPG key files that can be used for signing
metadata and packages of this repository, empty by default.
If a file can not be verified using the already imported
keys, import of keys from this option is attempted and the
keys are then used for verification.
gpgkey_dns_verification
Should the dnf attempt to automatically verify GPG
verification keys using the DNS system. This option
requires the unbound python module (python3-unbound) to be
installed on the client system. This system has two main
features. The first one is to check if any of the already
installed keys have been revoked. Automatic removal of the
key is not yet available, so it is up to the user, to
remove revoked keys from the system. The second feature is
automatic verification of new keys when a repository is
added to the system. In interactive mode, the result is
written to the output as a suggestion to the user. In
non-interactive mode (i.e. when -y is used), this system
will automatically accept keys that are available in the
DNS and are correctly signed using DNSSEC. It will also
accept keys that do not exist in the DNS system and their
NON-existence is cryptographically proven using DNSSEC.
This is mainly to preserve backward compatibility.
Default is False.
RPM packages' GPG key(s) can be specified in a .repo file, which can be updated by an RPM package from a repo with or without mandatory signing configured. Typically, all packages in a repo are built with CI build containers that all share the same signing key.
How to bootstrap the [Sigstore [TLS] pubkey, HKP (TLS) pubkey], to verify the [Sigstore hash, GPG .asc signature] of the manifest containing the [Sigstore, SHA-X] hash for each package and/or package file?
Also recent: "RPM 6.0 Released with OpenPGP Improvements and Signature Checking by Default" (2025-09) https://news.ycombinator.com/item?id=45354285
The future of Python web services looks GIL-free
C code needs to be updated to be safe in a GIL-free execution environment. It is a lot of work! The pervasive problem is that mutable data structures (lists, dicts, etc.) could change at any arbitrary point while the C code is working with them, and the reference count for others could drop to zero if *anyone* is using a borrowed reference (common for performance in CPython APIs). Previously the GIL protected where those changes could happen. In simple cases the fix is adding a critical section, but often there are multiple data structures in play. As an example, these are the changes that had to be done to the standard library json module:
https://github.com/python/cpython/pull/119438/files#diff-efe...
This is how much of the standard library has been audited:
https://github.com/python/cpython/issues/116738
The json changes above are in Python 3.15, not the just released 3.14.
The consequences of the C changes not being made are crashes and corruption if unexpected mutation or object freeing happens. Web services are exposed to adversity so be *very* careful.
It would be a big help if CPython released a tool that could at least scan a C code base to detect free threaded issues, and ideally verify it is correct.
> It would be a big help if CPython released a tool that could at least scan a C code base to detect free threaded issues, and ideally verify it is correct.
Create or extend a list of answers to:
What heuristics predict that code will fail in CPython's nogil "free threaded" mode?
Some of that is already around, but scattered across multiple locations. For example there is a list in the Python doc:
https://docs.python.org/3/howto/free-threading-extensions.ht...
And a dedicated web site:
https://py-free-threading.github.io/
But as an example neither include PySequence_Fast which is in the json.c changes I pointed to. The folks doing the auditing of stdlib do have an idea of what they are looking for, and so would be best suited to keep a list (and tool) up to date with what is needed.
Twake Drive – An open-source alternative to Google Drive
Do you really need a database for this? On a unix system, you should be able to: CRUD users, CRUD files and directories, grant permissions to files or directories
Is there a decade-old software that provides a UI or an API wrapper around these features for a "Google Drive" alternative? Maybe over the SAMBA protocol?
How would you implement things like version history or shareable URLs to files without a database?
Another issue would be permissions: if I wanted to restrict access to a file to a subset of users, I’d have to make a group for that subset. Linux supports a maximum of 65536 groups, which could quickly be exhausted for a nontrivial number of users.
As for the permissions, using ACLs would work better here. Then you don't need a separate group for every grouping.
TIL about ACLs! I think that would nicely solve the group permission issue.
Then let me also introduce you to extended attributes, aka xattrs. That's how the data for SELinux is stored.
There is no support for writing multiple xattrs in one transaction.
There is no support for writing multiple xattrs and file contents in one transaction.
Journaled filesystems that immediately flush xattrs to the journal do have atomic writes of single xattrs; so you'd need to stuff all data into one xattr value and serialize/deserialize it (with e.g. JSON, or potentially Arrow IPC with Feather ~mmap'd from xattrs; but getxattr() doesn't support mmap, and xattr value size limits apply: EXT4: 4K, XFS: 64K, BTRFS: 16K).
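A sketch of that single-xattr pattern on Linux, using the stdlib's os.setxattr/os.getxattr; the "user.app.meta" attribute name is arbitrary, and the 4 KiB ceiling is ext4's per the note above:

```python
import json
import os

EXT4_XATTR_LIMIT = 4096  # ~4K per-inode xattr ceiling on ext4

def pack(metadata: dict) -> bytes:
    """Serialize all metadata into ONE value for a single atomic xattr write."""
    blob = json.dumps(metadata, separators=(",", ":")).encode()
    if len(blob) > EXT4_XATTR_LIMIT:
        raise ValueError("metadata too large for a single ext4 xattr")
    return blob

def save(path: str, metadata: dict) -> None:
    # One setxattr() call = one value written atomically (journal permitting).
    os.setxattr(path, "user.app.meta", pack(metadata))

def load(path: str) -> dict:
    return json.loads(os.getxattr(path, "user.app.meta"))

blob = pack({"owner": "alice", "shared_with": ["bob"]})
print(json.loads(blob)["owner"])  # alice
```

save()/load() require a Linux filesystem with user xattrs enabled; packing everything into one value is what stands in for a multi-key transaction.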
Atomicity (database systems) https://en.wikipedia.org/wiki/Atomicity_(database_systems)
Does the US have enough graphite to meet growing energy demand?
> One of those critical materials is graphite, a mineral used in the electrodes of batteries used for electric vehicles and in stationary storage for the grid. Currently, all battery-grade graphite in the U.S. is sourced from abroad.
IDK how many times I've mentioned anodes made from hemp bast fiber are as good or better than battery anodes made from graphene. Bast fiber is already naturally branching.
Graphene can be made from graphite or any other source of carbon.
Graphene can be manufactured by flash heating unsorted plastic.
Graphene can be manufactured with hydrogen plasma and unsorted plastic.
Graphite and Graphene can be made from CO2.
Why formalize mathematics – more than catching errors
> While Paulson focuses on the obvious benefit of finding potential errors in proofs as they are checked by a computer, I will discuss some other less obvious benefits of shifting to formal math or “doing math with computers”
From https://news.ycombinator.com/item?id=44214804 sort of re: Tao's Real Analysis formalisms:
> So, Lean isn't proven with HoTT either.
Intel hamstrung by supply shortages across its business
They should fab carbon-based chips to eliminate supply chain limits, decrease resistivity, and reduce thermal waste.
CNT (Carbon Nanotubes) on rGO (reduced Graphene Oxide) wafers should work due to the difference in work functions between each form of carbon.
Semiconductor fabrication with (SiC) Silicon Carbide is already demonstrated.
Carbon epoxide (C_n H_2n O_n) would probably also be a sufficient substrate for electronic computing.
/?hnlog graphene, out of graphene :
- "Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490
Willow quantum chip demonstrates verifiable quantum advantage on hardware
but can it run Doom?
Linux would be a start :-)
FWIU a 6-stage RISC processor is sufficient to run Linux.
Things like CUDA-Q may be faster on classical computers than on quantum computers for forever; though what CUDA-Q solves for is also an optimization problem?
> though what CUDA-Q solves for is also an optimization problem?
"Optimization by decoded quantum interferometry" (2025) https://www.nature.com/articles/s41586-025-09527-5 .. https://news.ycombinator.com/context?id=45688122
Optimization by Decoded Quantum Interferometry
"Optimization by decoded quantum interferometry" (2025) https://www.nature.com/articles/s41586-025-09527-5 :
> Abstract: [...] Here we introduce decoded quantum interferometry (DQI), a quantum algorithm that uses the quantum Fourier transform to reduce optimization problems to decoding problems. When approximating optimal polynomial fits over finite fields, DQI achieves a superpolynomial speed-up over known classical algorithms
Antislop: A framework for eliminating repetitive patterns in language models
ScholarlyArticle: "Antislop: A Comprehensive Framework for Identifying and Eliminating Repetitive Patterns in Language Models" (2025) https://arxiv.org/abs/2510.15061 :
> Abstract: [...] Our approach combines three innovations: (1) The Antislop Sampler, which uses backtracking to suppress unwanted strings at inference time without destroying vocabulary; (2) An automated pipeline that profiles model-specific slop against human baselines and generates training data; (3) Final Token Preference Optimization (FTPO), a novel fine-tuning method that operates on individual tokens, surgically adjusting logits wherever a banned pattern has appeared in an inference trace.
From https://news.ycombinator.com/item?id=45546037#45585680 , an additional potential method:
>> Could build a simple heuristic: if similar memory content gets created/updated N times within short timeframe, flag it as potential loop
Forging Fedora's Future with Forgejo
Forgejo is a fork of Gitea, which is a fork of Gogs, which was a clone of old GitHub written in Go.
Why the Forgejo fork?
Gitea Actions and Forgejo Actions build from GitHub Actions YAML with nektos/act.
nektos/act: https://github.com/nektos/act
What would be the new commands to locally build from an SRPM and locally sign the built RPM?
Gitea/Forgejo has an OCI Image registry / Artifact registry, which supports signatures for any sort of artifact. Fedora has RPM 6 with the updated GPG support.
Would there be any value to using OCI artifact registries for RPM packages, and how would mirroring work?
Drop into REPL when your Python script crashes
pytest has a --pdb flag:
pip install pdbpp
pytest --pdb
pdbpp: https://github.com/pdbpp/pdbpp
pytest docs > How to handle test failures > Using pdb — The Python Debugger with pytest > Dropping to pdb on failures: https://docs.pytest.org/en/stable/how-to/failures.html
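Outside of pytest, the stdlib alone can drop a crashing script into a post-mortem debugger; a minimal sketch using sys.excepthook:

```python
import pdb
import sys
import traceback

def debug_on_crash(exc_type, exc, tb):
    """Print the traceback, then open pdb at the frame that raised."""
    traceback.print_exception(exc_type, exc, tb)
    pdb.post_mortem(tb)

sys.excepthook = debug_on_crash  # any uncaught exception now drops into pdb
```

Equivalently, `python -m pdb -c continue script.py` runs the script and enters post-mortem pdb at the first uncaught exception.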
Windows utilities go on every machine I set up
AI Slop.
Here's this from my setup_windows.ps1 powershell script for maintaining windows: https://github.com/westurner/dotfiles/blob/develop/scripts/s... ;
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -InstallPSWindowsUpdate
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -UpdateWindows
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -InstallWSL
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -InstallChoco
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -InstallChocoPackage
So then the regular tasks are -UpdateWindows and -UpdateChocoPackages :
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -UpdateWindows
powershell.exe -executionpolicy unrestricted -file ./setup_windows.ps1 -UpdateChocoPackages
Magnetized plasmas offer a new handle on nanomaterial design
"Electron magnetization effects on carbonaceous dusty nanoparticles grown in Ar−C2H2 capacitively coupled nonthermal plasma" (2025) https://arxiv.org/abs/2504.21217
DeepTutor – Chat with your research library (Zotero fork)
What is possible by integrating chat into Zotero (is it XUL?) that's not yet possible with e.g. pyzotero?
pyzotero, paperqa.contrib.ZoteroDB,
Paperai supports "Report Schemas" in YAML that could be useful for systematic reviews
IIRC the Zotero Connector doesn't work on students' Chromebooks.
Tabulator was a cool extension. A clone of Tabulator as a Web Extension that has local storage with a remote sync option would be useful.
Zotero does bibliographies with CiteProc and CSL Citation Style Language;
CiteProc: https://en.wikipedia.org/wiki/CiteProc
It is possible! But we still think a new independent app could give us more freedom to build the "Vibe Reading" features.
As electricity bills rise, candidates in both parties blame data centers
What won't be blamed is rapacious capitalism and the abandonment of consumer protections in the utility space. People warned of this 30 years ago when the market model was adopted for utilities. Well, here we are.
Why are they charging customers near the datacenters more than others? Why don't they charge their customers the same rate?
I don't think we actually have a market economy for electricity in the US. Which electricity markets in the US have competition instead of government-granted anticompetitive monopolies (that are failing to solve for intraday storage)?
(The Carter administration was already starting to deregulate DOE which their administration created, for example.)
EU has more of market economy for electricity: you must have intraday electricity rates to be a member state of EU. (SIDC)
(Edit)
What percentage of US electricity markets have more than one supplier?
From "State-By-State Scorecard on Electricity Competition" (2025) https://www.rstreet.org/research/state-by-state-scorecard-on... :
> Active competition promotes efficiency and innovation, and this is as true in the electric power industry as it is elsewhere in society.
The incumbent government-created electricity monopoly here is allowed to prohibit customers from using their own solar electricity, with mandatory "Buy-All Sell-All" service agreements.
The sole electricity supplier in the region threatens to cancel service to customers for using the renewable electricity that they generate.
Astrocytes are the superstars of long-term memory: multi-day trace stabilizers
ScholarlyArticle: "The astrocytic ensemble acts as a multiday trace to stabilize memory" (2025) https://www.nature.com/articles/s41586-025-09619-2
Does this multi-day interval correspond to any of the intervals estimated in studies of neural representation drift?
From yesterday regarding carbon microtubules, quantum cognition, and representation drift: https://news.ycombinator.com/item?id=45620897
How do astrocytes affect representation drift?
The drift is probably oscillatory in nature. It's process affecting material. Astrocytes and nanotubules don't directly affect the drift, they are affected by it simply as the memory is shifted by a material/process interaction.
I haven't heard that they've identified period(s) of oscillation or resonance as a cause of representation drift.
Is this fair to say: Astrocyte activations are more stable for a longer period of time than other neuroactivations?
It's a theory of drift postulated by neurobiologists.
Unsure if they're more stable.
IMHO the cortex is the superstar of long-term memory because of spontaneous recovery of LTM due to redundant storage in the cortex
You've got to settle the memory in during this days-long process first, and that's being revised through sharp wave ripples.
That appears to be true for STM but IDK about LTM?
- "Study shows how memories ripple through the brain" (2017) https://www.ninds.nih.gov/news-events/news/press-releases/st...
"Learning-enhanced coupling between ripple oscillations in association cortices and hippocampus" (2017) https://www.science.org/doi/10.1126/science.aan6203 ... NeuroGrid
- "Brain found to store three copies of every memory" (2024) https://news.ycombinator.com/item?id=41352124 :
> So that makes four (4) copies of each memory in the brain if you include the engram cells in the prefrontal cortex
Good to read Rhythms of the Brain for how SWR creates memories.
Cyberpsychology's Influence on Modern Computing
- Cyberpsychology: https://en.wikipedia.org/wiki/Cyberpsychology
- "The Psychology of Cyberspace" (1996) https://doi.org/10.23668/psycharchives.10362
[dead]
Every Language Model Has a Forgery-Resistant Signature
Semantics, but the article is describing model "fingerprints", basically stylometrics; that's a partial (fingerprint), not a signature.
These are "signatures":
https://github.com/sigstore/model-transparency
"Model authenticity and transparency with Sigstore" (2025) https://next.redhat.com/2025/04/10/model-authenticity-and-tr...
WebMCP
W3C specs are written with respec: ReSpec docs: https://respec.org/docs/#w3c-documents
W3C Process document > 3.4. Chartered Groups: Working Groups and Interest Groups: https://www.w3.org/policies/process/#GAGeneral
There's WebGPU, WebNN, window.ai, Prompt API, Summarizer API, Writer API, Rewriter API, Language Detector API, Translator API ; and now WebMCP
WebNN: https://www.w3.org/TR/webnn/
webmachinelearning/prompt-api > "Explainer for the Prompt API": https://github.com/webmachinelearning/prompt-api
https://developer.chrome.com/docs/ai/built-in :
> Standardization effort: We're working to standardize all of these APIs for cross-browser compatibility.
> The Language Detector API and Translator API have been adopted by the W3C WebML Working Group. We've asked Mozilla and WebKit for their standards positions.
> The Summarizer API, Writer API, and Rewriter API have also been adopted by the W3C WebML Working Group. We've asked Mozilla and WebKit for their standards positions.
webmachinelearning/webmcp: https://github.com/webmachinelearning/webmcp
jasonjmcghee/WebMCP: https://github.com/jasonjmcghee/WebMCP
Having worked on at least one web app with a name that started with "Web", I'm not surprised.
/? mcp chrome: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... :
- "Show HN: We packaged an MCP server inside Chromium" (today) https://news.ycombinator.com/item?id=45618536 re: browseros-mcp: https://github.com/browseros-ai/BrowserOS/blob/main/docs/bro...
- "Chrome DevTools (MCP) for your AI agent" https://developer.chrome.com/blog/chrome-devtools-mcp .. https://news.ycombinator.com/item?id=45412734 (September 2025) .. :
> We're launching today a public preview for the new Chrome DevTools Model Context Protocol (MCP) server, bringing the power of Chrome DevTools to AI coding assistants.
> Coding agents face a fundamental problem: they are not able to see what the code they generate actually does when it runs in the browser. They're effectively programming with a blindfold on.
> The Chrome DevTools MCP server changes this. AI coding assistants are able to debug web pages directly in Chrome, and benefit from DevTools debugging capabilities and performance insights. This improves their accuracy when identifying and fixing issues.
How could the Chrome DevTools MCP be integrated with the Gemini Computer Use model?
From https://news.ycombinator.com/item?id=45543923 :
> Competency Story: The customer and product owner can write BDD tests in order to validate the app against the requirements
> Prompt: Write playwright tests for #token_reference, that run a named factored-out login sequence, and then test as human user would that: when you click on Home that it navigates to / (given browser MCP and recently the Gemini 2.5 Computer Operator model)
"Introducing the Gemini 2.5 Computer Use model" (October 2025) https://blog.google/technology/google-deepmind/gemini-comput...
Could this help with accessibility reviews?
"Lighthouse accessibility score" https://developer.chrome.com/docs/lighthouse/accessibility/s...
awesome-a11y > Tools: https://github.com/brunopulis/awesome-a11y/blob/main/topics/...
This explains all the new random GPO settings I had to go disable at the office this week! (A lot of users are reporting performance issues with browsers, seems like all the browsers are adding AI things... seems like a good place to start.)
This is as bad or worse than agreeing to voice search with them.
Hadn't realized we've all been opted-in.
My voice assistant used to be able to create a reminder without siphoning everything out to "must be reviewed because it's AI" remote AI.
Is it possible to use non-AI voice search on YouTube (with GoogleTV) without signing one's life away?
Try voice searching for "weather in [city]" with YT on GTV: it launches another (Google) app instead of just adding text to the search field.
When they asked for suggestions for OpenAI's fork of Chromium, I suggested adding fuzzy and regex search in a drawer and sending it upstream; like vimgrep for Chromium. That would help solve for Search, like the original mission of the company.
Subwavelength phase engineering deep inside silicon
"Subwavelength phase engineering deep inside silicon" (2025) https://iopscience.iop.org/article/10.1088/2515-7647/adf7ef/... :
> Abstract: [...] We design and numerically demonstrate a volumetric metaoptic monolithically embedded within the bulk, achieving full 2π phase control at telecommunication wavelengths, with simulated transmission efficiencies reaching 90%. The architecture is guided by a semi-analytical Fabry–Pérot model and validated through full-wave simulations. Arrays of 250 nm-wide metaatoms spaced at 300–410 nm pitch yield a focusing efficiency of 70%. With the wafer surface left pristine, this platform can potentially enable co-integration with electronics, MEMS/NEMS, and conventional metasurfaces. Moreover, the method is directly transferable to other transparent dielectrics compatible with ultrafast laser writing. These results establish a CMOS-compatible blueprint for three-dimensional nanophotonics and multi-level integration within the wafer.
Near-Field Optical Nanopatterning of Graphene
"Near-Field Optical Nanopatterning of Graphene" (2025) https://onlinelibrary.wiley.com/doi/10.1002/smsc.202500184 :
> Abstract: [...] By finely tuning experimental parameters such as laser exposure time, the nanopatterning feature size ranging from 1–30 nm, and the resulting shapes from nanoscale elevated structures (nanoblister shape) to punched holes can be precisely modulated. This nanopatterning strategy achieves feature sizes at the sub-10 nm scale and represents an advancement toward fabricating all-2D material devices, setting new benchmark in nanoscale manufacturing for quantum and photonic technologies.
Intercellular communication in the brain through a dendritic nanotubular network
Penrose’s vindication: In a broad philosophical sense. His intuition that quantum effects might play some role in cognition seems less far-fetched now than it did 30 years ago.
But vindication of Orch OR specifically (microtubule-based quantum gravity collapses driving consciousness) not yet.
https://royalsocietypublishing.org/doi/10.1098/rsta.1998.025...
The OP's article does a lot more to disprove such a hypothesis by instead offering a more credible alternative explanation:
Neurons found in the CNS have tubules large enough to allow transport of ions and even relatively large polypeptides, similar to, but more permissive than, the well-known gap junctions found between smooth muscle and cardiac muscle cells.
Penrose's hypothesis is crank science about quantum gravity messing with your CNS in a way comparable to "body thetans" in Scientology.
But does this help explain Representational drift?
From "Concept cells help your brain abstract information and build memories" https://news.ycombinator.com/item?id=42784396 :
> the regions of the brain that activate for a given cue vary over time
"Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882...
>> Future work should characterize drift across brain regions, cell types, and learning.
How do nanotubules in the brain affect representation drift?
There is EMF involved in cognition given that, for example, "Neuroscience study shows the brain emits light through the skull" (2025) https://news.ycombinator.com/item?id=44697995
Aren't there certainly quantum effects in the EMF wavefield of and around the brain?
The common understanding is that at the molecular scale that your nervous system operates, quantum effects are averaged out and don't lead to instability of neuronal activity.
Tegmark has used actual, you know, numbers and stuff to show that quantum effects in the brain are pretty implausible.
Again there, does the EMF/RF field created by the electrovolt wave function of the brain affect the electrovolt wave function of the brain? If so, isn't that a feed-forward feedback loop (where there may be quantum behavior)?
Does this paper also fail to assess other fields relevant to understanding nonlocal neuroactivation in disproving that there is any quantumness in cognition?
How do humans simulate digital and quantum circuits with the brain?
And, why do attempts to localize activations in the brain weeks apart fail; why is there representation drift?
Actual evidence of:
/? quantum in the brain: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=qua...
/? quantum cognition: https://www.google.com/search?q=quantum+cognition
gh topic: quantum-cognition: https://github.com/topics/quantum-cognition (2025: 7 results; all Julia)
2000: the referenced Tegmark paper
From https://scholar.google.com/scholar?q=related:-mGt9tzYwSUJ:sc... :
- 1998: "Quantum computation in brain microtubules? The Penrose–Hameroff 'Orch OR 'model of consciousness" (1998)
- 2002: "Quantum computation in brain microtubules: Decoherence and biological feasibility" (2002)
Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition
The first fully recyclable, sub-micrometer printed electronics
ScholarlyArticle: "Capillary flow printing of submicrometre carbon nanotube transistors" (2025) https://www.nature.com/articles/s41928-025-01470-7
[deleted]
Show HN: A large format XY scanning hyperspectral camera
Applications for:
"Multispectral imaging through scattering media and around corners via spectral component separation" (2024) https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-27-48786&id... .. https://news.ycombinator.com/item?id=42557904
"Multi-sensor characterization for an improved identification of polymers in WEEE recycling" (2024) [WEE: e-waste] https://news.ycombinator.com/item?id=42534637 ..
"Reversible optical data storage below the diffraction limit" (2023) https://news.ycombinator.com/item?id=42331986 :
>> [...] This is possible by multiplexing the storage in the spectral domain.
"Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
>> This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ).
"A self-healing multispectral transparent adhesive peptide glass" https://www.nature.com/articles/s41586-024-07408-x :
> Moreover, the supramolecular glass is an extremely strong adhesive yet it is transparent in a wide spectral range from visible to mid-infrared. This exceptional set of characteristics is observed in a simple bioorganic peptide glass composed of natural amino acids, presenting a multi-functional material
Further study:
/? hyperspectral, specra-: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Related applications:
Was looking into phase imaging and the four Stokes parameters S0, S1, S2, S3, and so on, in assessing applications for the Parallel Axis Theorem: 2x2 "superpixel" imaging can capture the ((-45, 45), (L Circular, R Circular)) polarization information necessary to infer phase (and non-quantum optical entanglement is polarization)
Is polarimetric imaging (per-pixel polarization information) hyperspectral or hyperspectropolarimetrical?
spectropolarimetry: https://www.google.com/search?client=firefox-b-1-d&q=spectro...
Spectropolarimetry -> Polarimetry: https://en.wikipedia.org/wiki/Polarimetry
For example a division-of-focal-plane (DoFP) camera / image sensor has 2x2 pixels.
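For the common linear DoFP layout (analyzers at 0/45/90/135 degrees; the (+/-45, L/R-circular) superpixel described above would additionally recover S3), the per-superpixel Stokes reduction is roughly this minimal sketch (my own illustration, not any particular sensor's SDK):

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from one 2x2 DoFP superpixel
    (intensities behind 0/45/90/135-degree linear analyzers)."""
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical
    s2 = i45 - i135                     # +45 vs. -45
    dolp = math.hypot(s1, s2) / s0      # degree of linear polarization
    aolp = 0.5 * math.atan2(s2, s1)     # angle of linear polarization
    return s0, s1, s2, dolp, aolp

# Fully +45-polarized light of unit intensity (Malus: I = cos^2(theta - 45))
s0, s1, s2, dolp, aolp = linear_stokes(0.5, 1.0, 0.5, 0.0)
```

Note that without circular analyzers S3 is unmeasured, which is why a linear-only DoFP sensor is polarimetric but cannot infer full phase by itself.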
MIT physicists improve the precision of atomic clocks
I’ve spent a decent chunk of my career wrestling with time sync — NTP/PTP, GPS, timezones, all that fun stuff. For real world network time infrastructure, where do we actually hit diminishing returns with clock precision? Like, at what point does making clocks more precise stop helping in practice?
Asking partly out of curiosity, I have been toying with a future pet project ideas around portable atomic clocks, just to skip some of the headaches of distributed time sync altogether. Curious how folks who’ve worked on GPS or timing networks think about this.
I guess very few systems have better absolute time than a few microseconds. Those systems are probably exclusively found in HFT and experimental physics.
This past week I tried synchronizing the time of an embedded Linux board with a GPS PPS signal via GPIO. Turns out the kernel interrupt handler already delays the edge by 20 us compared to busy polling the state of the pin. Stuff then gets hard to measure at sub microsecond scales.
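On the diminishing-returns question: the classic four-timestamp NTP offset/delay estimate shows why path asymmetry, not clock precision, often sets the floor. A sketch with synthetic numbers (assumed: 3 ms true offset, 5 ms outbound / 7 ms return path):

```python
# Four-timestamp clock offset estimate, NTP-style (cf. RFC 5905).
# Synthetic scenario: server clock = client clock + 3 ms,
# one-way delays 5 ms (out) and 7 ms (back), 0.5 ms server processing.
t0 = 0.0000  # client transmit (client clock)
t1 = 0.0080  # server receive  (server clock): 0 + 5 ms path + 3 ms offset
t2 = 0.0085  # server transmit (server clock): 0.5 ms processing
t3 = 0.0125  # client receive  (client clock): t2 - offset + 7 ms path

offset = ((t1 - t0) + (t2 - t3)) / 2.0  # estimated offset: 2 ms
delay = (t3 - t0) - (t2 - t1)           # round-trip path delay: 12 ms
```

The estimate is off by (7 ms - 5 ms) / 2 = 1 ms regardless of how precise the clocks are; past that point, better clocks stop helping unless the path asymmetry is also controlled (which is roughly what PTP/White Rabbit try to do).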
From https://news.ycombinator.com/item?id=44054783 :
> "Re: ntpd-rs and higher-resolution network time protocols {WhiteRabbit (CERN), SPTP (Meta)} and NTP NTS : https://news.ycombinator.com/item?id=40785484 :
>> "RFC 8915: Network Time Security for the Network Time Protocol" (2020)
Yes, I'm aware of some of these developments. Impressive stuff, just not the level of precision one achieves tinkering for a few days with a basic GNSS receiver.
Is this (the OT [1]) with ytterbium a more or less efficient way to count clock ticks with high precision than is described in [2]?
[1] "Quantum-amplified global-phase spectroscopy on an optical clock transition" (2025) https://www.nature.com/articles/s41586-025-09578-8
[2] "Quantum watch and its intrinsic proof of accuracy" (2022) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
Method to directly generate photons in optical fiber could secure quantum net
ScholarlyArticle: "Selective excitation of a single rare-earth ion in an optical fiber" (2025) https://opg.optica.org/oe/fulltext.cfm?uri=oe-33-19-41011
NewsArticle: "Streamlined method to directly generate photons in optical fiber could secure future quantum internet" (2025) https://phys.org/news/2025-10-method-generate-photons-optica...
Ruby core team takes ownership of RubyGems and Bundler
Decentralized package hosting is the only way.
The key question here is how exactly the supply chain attacks will be prevented. If you consider release of a new version of a library as some sort of transaction, it's easy to see the difference from cryptocurrencies: in crypto a transaction can be automatically verified, but with software releases that is impossible.

It is hard to imagine hundreds of hostings at the same very high trust level, so either risks become significant or there are several, but not many, hostings which everyone can trust. If Number of hostings << Number of users, then it's not truly decentralized, and there still exists a different risk, when there's some sort of political split between some of them.

Summarizing all of that, I don't know if decentralization is a solution at all. Transparent community ownership over a centralized solution is much better.
> The key question here is how exactly the supply chain attacks will be prevented
By using signed packages. Why is this even a question.
Can Gems be served from OCI Container/Artifact registries, which (also) already support signatures?
From https://news.ycombinator.com/item?id=44991636 :
> Native Containers are bare-metal host images as OCI Images which can be stored in OCI Container Registries (or Artifact registries because packages too). GitHub, GitLab, Gitea, GCP, and AWS all host OCI Container/Artifact Registries
So, packages there too would simplify.
Re: "RPM 6.0 Released with OpenPGP Improvements and Signature Checking by Default" (2025) and Sigstore and PyPI and SLSA.dev and key revocation transparency: https://news.ycombinator.com/item?id=45354568
Nerdctl supports various snapshot, lazy start, and distributed cloud storage container stores: https://news.ycombinator.com/item?id=45270468
Ruby has:
gem cert --build your@email.com
gem install gemname -P HighSecurity
And also for signatures now there's sigstore-ruby and Trusted Publishing.

sigstore-ruby: https://github.com/sigstore/sigstore-ruby
guides.rubygems.org/trusted-publishing: https://guides.rubygems.org/trusted-publishing/ :
> Trusted publishing is a mechanism for uploading gems to RubyGems.org without using long-lived secret credentials. [..]
> Trusted Publishing is a term for using OpenID Connect (OIDC) to exchange short-lived identity tokens between a trusted third-party service and RubyGems.org. This allows obtaining short-lived API tokens in an automated environment (such as CI) without having to store long-lived API tokens or username/password credentials.
Microwave technique allows energy-efficient chemical reactions
"Focused thermal energy at atomic microwave antenna sites for ecocatalysis" (2025) https://www.science.org/doi/10.1126/sciadv.ady4043
How I Accidentally Created the Fastest CSV Parser Ever Made
[deleted]
This may be the fastest small tabular data model for classification and regression at present; to go fast: "Show HN: TabPFN v2 – A SOTA foundation model for small tabular data" (2024) https://news.ycombinator.com/item?id=42647343
Walks in Rotation Spaces Return Home When Doubled and Scaled
"Walks in Rotation Spaces Return Home when Doubled and Scaled" (2025) https://journals.aps.org/prl/abstract/10.1103/xk8y-hycn :
> Abstract: The dynamics of numerous physical systems, such as spins and qubits, can be described as a series of rotation operations, i.e., walks in the manifold of the rotation group. A basic question with practical applications is how likely and under what conditions such walks return to the origin (the identity rotation), which means that the physical system returns to its initial state. In three dimensions, we show that almost every walk in SO(3) or SU(2), even a very complicated one, will preferentially return to the origin simply by traversing the walk twice in a row and uniformly scaling all rotation angles. We explain why traversing the walk only once almost never suffices to return, and comment on the problem in higher dimensions.
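The double-and-scale result can be checked numerically. A minimal pure-Python sketch (my own toy three-step walk, not the paper's code): scan for the scale s* at which the scaled walk's net rotation angle hits pi (quaternion scalar part crosses zero); traversing the walk twice at that scale then returns to the identity rotation.

```python
import math

def quat_axis_angle(axis, angle):
    # Unit quaternion for a rotation by `angle` about `axis`
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), ax*s, ay*s, az*s)

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def walk_product(walk, scale):
    # Compose the walk with all rotation angles uniformly scaled
    q = (1.0, 0.0, 0.0, 0.0)
    for axis, angle in walk:
        q = quat_mul(q, quat_axis_angle(axis, scale * angle))
    return q

# An arbitrary three-step walk in SO(3) (chosen so w(1) < 0)
walk = [((0, 0, 1), 2.5), ((1, 0, 0), 0.3), ((0, 0, 1), 2.5)]

# w(0) = 1 and w(1) < 0, so bisect for the scale where the scalar
# part w = cos(theta/2) is 0, i.e. the net rotation angle is pi.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if walk_product(walk, mid)[0] > 0:
        lo = mid
    else:
        hi = mid
s_star = (lo + hi) / 2.0

# Traverse the scaled walk twice in a row: a rotation by pi, squared,
# is a rotation by 2*pi, i.e. the identity rotation.
q = walk_product(walk + walk, s_star)
```

(A rotation-by-pi quaternion (0, v) squares to (-1, 0, 0, 0), which is the identity rotation, since q and -q represent the same rotation.)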
NewsArticle: "Mathematicians have found a hidden 'reset button' for undoing rotation" (2025) https://www.newscientist.com/article/2499647-mathematicians-...
Standard Model and General Relativity Derived from Mathematical Self-Consistency
Is this a citation for https://news.ycombinator.com/context?id=45585654 ?
I added notes there about additional considerations re: tests of alternatives to relativity and other derivations, but it was flagged; and the referenced repo doesn't appear to exist?
From https://news.ycombinator.com/item?id=45585654 :
> 137 pages of proofs. Open‑source implementations. Fully reproducible. [...]
> Paper: [link] Code: github.com/[…]/SimpleUniverse
> Judge for yourself. The equations don’t lie.
/? site:github.com inurl:SimpleUniverse : https://www.google.com/search?q=site%3Agithub.com+inurl%3ASi... : 0 results today
But this is a real citation, so:
HN title: "Standard Model and General Relativity Derived from Mathematical Self-Consistency"
ScholarlyArticle: "The Self-Consistent Coherence-Maximizing Universe: Complete Derivation of the Standard Model and General Relativity from Mathematical Self-Consistency" (2025) https://www.academia.edu/144466150/The_Self_Consistent_Coher...
0 scholar results tho: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=The...
Can confirm (after logging into Academia.edu to read the article) that there is indeed a 137-page ScholarlyArticle PDF; but unlike the .ps+.PDF on ArXiv, it looks like it's not possible to copy/paste the abstract:
"The Self-Consistent Coherence-Maximizing Universe: Complete Derivation of the Standard Model and General Relativity from Mathematical Self-Consistency" (2025) https://www.academia.edu/144466150/The_Self_Consistent_Coher... :
> Abstract: We derive the complete structure of fundamental physics from a single principle: Quantum coherence maximization under self-consistency constraints. [...]
That sounds consistent with observed retrocausality.
> Keywords: coherence maximization, golden ratio, Standard Model, General Relativity, holographic principle, E8 symmetry, zero free parameters
> 1. Holographic Architecture: The 2+1D World-Hologram
> Falsifiable If: [...] Quantum computer fails to reproduce
In the other HN post: https://news.ycombinator.com/item?id=45585654 , it says:
> Tested on quantum computers. TFIM critical point converges to 1/phi in the thermodynamic limit.
Which quantum computer is this tested on, and how? Is there Python code in Cirq or QISKit or Tequila, for example?
Automatic K8s pod placement to match external service zones
Couldn't something like this make CI builds faster by running builds near already-cached container images?
Are you thinking about already-cached container images at the host level? Not sure how AZP fits in here?
Since you mentioned it, what I've done before when it comes to improving CI builds, is to use karpenter + local SSD mounts with very large instance types in an idle timeout of ~1h. This allowed us to have very performant build machines at a low cost. The first build of the day took a while to get going, but for the price-benefit perspective it was great.
Are the container image repositories and the container images also "external resources" that could make CI build pod placement more efficient?
Thanks; that sounds faster than most self-hosted CI services.
If the image repositories were AZ bound resources, that would make the CI build process more efficient.
Or, if the resources that the CI build utilizes within the image (after the image is pulled and started) are AZ-bound, then yes, the build process would be improved, since the CI build would fetch AZ-local resources rather than crossing the AZ boundary
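As a sketch of pinning CI build pods to the same zone as a zone-local registry mirror (the zone value and mirror hostname here are hypothetical), using the standard topology label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ci-build
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["us-east-1a"]  # zone of the registry mirror (example value)
  containers:
    - name: build
      image: registry-mirror.us-east-1a.internal/builder:latest  # hypothetical mirror
```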
A Gemma model helped discover a new potential cancer therapy pathway
Other potential cancer treatment methods that 2.5pro - a different model than is referenced in the article - has confirmed as potentially viable when prompted by an amateur cancer researcher:
- EPS3.9: Polysaccharide (deep sea bacterium sugar, fermentable, induces IFN-1) causes Pyroptosis causes IFN-1 causes Epitope Spreading (which is an amplifying effect) causes anti-cancer response.
- CPMV; Cow-Pea Mosaic Virus (is a plant virus that doesn't infect humans but causes an (IFN-1 (IFN-alpha and a lot of IFN-beta)) anti-cancer response in humans. Cow Pea consumption probably used to be even more prevalent in humans before modern agriculture; cow peas may have been treating cancer in humans for thousands of years at least.)
I emailed these potential new treatments to various researchers with a fair disclaimer; but IDK whether anything has been invested in developing a treatment derived from or informed by knowledge of the relevant pathways affected by EPS3.9 or CPMV.
There are RNA and mRNA cancer vaccines in development.
Without a capsid, RNA is destroyed before arrival. So RNA vaccines are usually administered intramuscularly.
AFAIU, as a general bioengineering platform, CPMV Cow-Pea Mosaic Virus could also be used like a capsid to package for example an RNA cancer vaccine.
AFAIU, CSC3.9 (which produces the "potent anti-cancer" EPS3.9 marine spongiibacter polysaccharide) requires deep sea pressure; but it's probably possible to bioengineer an alternative to CSC3.9 which produces EPS3.9 in conditions closer to ambient temp and pressure?
> Would there be advantages to (CPMV + EPS3.9) + (CPMVprime + mRNA)? (for cancer treatment)
From https://news.ycombinator.com/item?id=45241431 :
> "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 :
>> We derive the perihelion precession of planetary orbits using quantum field theory extending the Standard Model to include gravity. Modeling the gravitational bound state of an electron via the Dirac equation of unified gravity [Rep. Prog. Phys. 88, 057802 (2025)], and taking the classical planetary state limit, we obtain orbital dynamics exhibiting a precession in agreement with general relativity. This demonstrates that key general relativistic effects in planetary motion can emerge directly from quantum field theory without invoking the geometric framework of general relativity.
What about [super] fluids, too, though?
> Physical vacuum as a dilatant fluid yields exact solutions to Pioneer anomaly and Mercury’s perihelion precession" (2019) https://cdnsciencepub.com/doi/10.1139/cjp-2018-0744
And (origami) geometry is enough to skip a bunch of Feynman diagrams for calculating scattering amplitudes:
"Amplituhedra and origami" (2025) https://arxiv.org/abs/2410.09574 ... "Origami Patterns Solve a Major Physics Riddle" (2025) https://news.ycombinator.com/item?id=45492704
I built a memory system for Claude that solves the context loss issue
Notes from "Show HN: Recall: Give Claude memory with Redis-backed persistent context" a few days ago: https://news.ycombinator.com/context?id=45517613 .. https://github.com/joseairosa/recall
How does buildautomata_memory_mcp differ in functionality and implementation from recall, and post-ECAN OpenCog STM/LTM decay?
I'm working with a loop that keeps reverting and reimplementing the same code and there's not much risk of any malicious input in the context given the chat /save (which doesn't include pytest outputs it parsed for example).
How to detect when this occurs?
"A small number of samples can poison LLMs of any size" (yesterday) https://news.ycombinator.com/item?id=45529587
The version history with diffs helps with that. When you store/update memories, the system tracks changes over time. You can query the timeline to see "what solutions did I already try for X problem?"

The memory decay approach (tracking access patterns + importance decay) also surfaces this: if you keep accessing the same memory about a bug that never gets resolved, that's a signal.

Not automatic loop detection yet, but the infrastructure is there. Could build a simple heuristic: if similar memory content gets created/updated N times within a short timeframe, flag it as a potential loop.

The poisoning concern is real. Our design mitigates it somewhat: agents explicitly choose what to store (tool-based, not automatic injection), and you can prune/audit memories.
> Could build a simple heuristic: if similar memory content gets created/updated N times within short timeframe, flag it as potential loop
That would probably prevent waste and could scale.
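A minimal sketch of that heuristic (names and thresholds are my own; this is not recall's or buildautomata_memory_mcp's actual implementation):

```python
import time
from difflib import SequenceMatcher

class LoopDetector:
    """Flag a potential agent loop when near-duplicate memory writes
    recur N times within a sliding time window."""

    def __init__(self, n=3, window_s=600, similarity=0.9):
        self.n = n                    # writes needed to flag a loop
        self.window_s = window_s      # sliding window, seconds
        self.similarity = similarity  # fuzzy-match threshold
        self.writes = []              # (timestamp, content)

    def record(self, content, now=None):
        """Record a memory write; return True if it looks like a loop."""
        now = time.time() if now is None else now
        # Drop writes that fell out of the window
        self.writes = [(t, c) for t, c in self.writes
                       if now - t <= self.window_s]
        self.writes.append((now, content))
        similar = sum(
            1 for _, c in self.writes
            if SequenceMatcher(None, c, content).ratio() >= self.similarity
        )
        return similar >= self.n

detector = LoopDetector()
flags = [
    detector.record("fix parser bug in csv.py", now=0),
    detector.record("fix parser bug in csv py", now=10),
    detector.record("fix parser bug in csv.py", now=20),
]
```

The third near-identical write within the window trips the flag; exact-duplicate detection could use a content hash instead, which scales better than pairwise fuzzy matching.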
From https://news.ycombinator.com/context?id=45561039 :
> /? llm firewall: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
There's probably already a name for 'Sensitivity analysis'; how does the output vary with/without particular context?
Wireshark 4.6.0 Supports macOS Pktap Metadata (PID, Process Name, etc.)
Any ways to bring that to Linux or Windows? I've long yearned for a solution for this.
It supports ETW as an input format, but I (personally) haven't yet gotten my head around how to do the same.
My current workflow is capture with pktmon, then analysis in Microsoft Network Monitor to expose PID stuff.
I figure there /has/ to be a way to do it similarly in Wireshark, I just haven't found a how-to and haven't dug into it myself. Once I do (it's on my casual todo list) I'll do a writeup on that as well, since it'd be super useful.
ptcpdump: https://github.com/mozillazg/ptcpdump :
> ptcpdump is a tcpdump-compatible packet analyzer powered by eBPF, automatically annotating packets with process/container/pod metadata when detectable. Inspired by jschwinger233/skbdump.
awesome-ebpf > Tools: https://github.com/zoidyzoidzoid/awesome-ebpf#tools
opensnitch is an egress firewall that displays PIDs: https://github.com/evilsocket/opensnitch
edgeshark: https://github.com/siemens/edgeshark :
> Discover and capture container network traffic from your comfy desktop Wireshark, using a containerized service and a Wireshark plugin.
Looks like it's possible to select containers from a GUI form with edgeshark. Perhaps something similar for process selection?
Refactoring terminology: https://news.ycombinator.com/item?id=44934531
"You did this with an AI and you do not understand what you're doing here" (2025) https://news.ycombinator.com/item?id=45330378
"Comprehension debt: A ticking time bomb of LLM-generated code" (2025) https://news.ycombinator.com/item?id=45423917
Test case to build understanding about safety and model and agent and PEBKAC inadequacy:
Generate a stop light.
Generate a stop light with unit tests.
Also, test that there can never be multiple lights on at once, in software, and then in hardware
(Nevermind that nobody will understand a new different stop light and the impact; the exercise is to also try and code one that's sufficient (that's validateable per customer specifications, and ideally verifiable per a sufficient formal specification))
Run the tests and improve test coverage by parsing the exceptions and variables in the test output
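The exercise above, hand-written as a minimal sketch (class and state names are my own; the mutual-exclusion invariant is the point):

```python
class StopLight:
    """Toy stop light: red -> green -> yellow -> red, with the safety
    invariant that exactly one light is ever on."""

    COLORS = ("red", "green", "yellow")
    NEXT = {"red": "green", "green": "yellow", "yellow": "red"}

    def __init__(self):
        self.state = {"red": True, "green": False, "yellow": False}
        self._check_invariant()

    def _check_invariant(self):
        # Never multiple lights on at once (and never zero)
        assert sum(self.state.values()) == 1, "exactly one light may be on"

    def step(self):
        current = next(c for c, on in self.state.items() if on)
        self.state = {c: (c == self.NEXT[current]) for c in self.COLORS}
        self._check_invariant()

light = StopLight()
light.step()  # red -> green
```

Checking the invariant in software like this validates each observed state; verifying it for all reachable states (and then in hardware, e.g. with mutually exclusive drive logic) is the harder formal-specification part of the exercise.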
What is AI slop, and why should projects do PR review on slop when the contributor could've asked an LLM to review their code? GitHub has optional auto-review of all PRs IIUC?
As a senior engineer looking at a handful of vibe-coded prototypes of apparently sufficient but lurkingly technical debt-y projects, should I spend my time vibe-coding more on top or should I step back and return to sound engineering and software development methods to increase the value of and reduce the risk of these cool demos it auto-generated from really short prompts?
Explain each layer of this stack; and then update the AGENTS.md
Three ways formally verified code can go wrong in practice
No hardware failure is considered? No cosmic rays flipping bits? No soft or hard real-time guarantees are discussed? What about indeterminate operations that can fail such as requesting memory from some operating system dynamically?
I'm asking because I thought high integrity systems are generally evaluated and certified as a combination of hardware and software. Considering software alone seems pretty useless.
Side channels? Is best out of 2 sufficient or is best out of 3 necessary?
From https://news.ycombinator.com/context?id=39938759 re: s2n-tls:
> [ FizzBee, Nagini, Deal-solver, Dafny; icontract, pycontracts, Hoare logic, DbC Design-by-Contract, invariants, parallelism and concurrency and locks, io latency, pass by reference in distributed systems, "FaCT: A DSL for Timing-Sensitive Computation" and side channels [in hw and software] https://news.ycombinator.com/item?id=38527663 ]
There are so many things to consider;
/? awesome-safety https://westurner.github.io/hnlog/#search:awesome-safety :
awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
Hazard (logic) https://en.wikipedia.org/wiki/Hazard_(logic)
Hazard (computer architecture); out-of-order execution and delays: https://en.wikipedia.org/wiki/Hazard_(computer_architecture)
Soft error: https://en.wikipedia.org/wiki/Soft_error
SEU: Single-Event Upset: https://en.wikipedia.org/wiki/Single-event_upset
And then cosmic ray and particle physics
GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773)
Maybe there's a tooling opportunity. Build some sort of local firewall that sits in front of agent calls to audit them, or at least log and track them.
/? llm firewall https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Translating Cython to Mojo, a first attempt
In my niche corner of scientific computing it feels like Cython has largely been replaced by Numba and CFFI, or just Julia. Last I checked it still needed setup.py which is a bit of a deal breaker in 2025.
/? cython pyproject.toml: https://www.google.com/search?q=cython+pyproject.toml
From "Building cython extensions using only pyproject.toml (no setup.py)" https://github.com/pypa/setuptools/discussions/4154#discussi... :
[build-system]
requires = ["setuptools", "cython"]
[tool.setuptools]
ext-modules = [
{name = "example", sources = ["example.pyx"]} # You can also specify all the usual options like language or include_dirs
]
Pybind11 seems more popular in my area now. I still like Cython though in terms of the ease of wrapping anything in a Python-y interface.
Obligatory Rust + PyO3/Maturin plug. Very ergonomic and easy to use.
That's true, but I still don't see that so much because the core libraries are not as mature and often they're just thin wrappers around the C/C++/Fortran API without examples. Just as an example, I'd count this SUNDIALS library as like that: https://docs.rs/sundials/0.3.2/sundials/
Nothing wrong with that as a starting point of course, but it's easier just to compile it as a dependency and look at the core documentation if you're familiar with C++; you'll need to be reading the C++ examples anyway to write Rust code with it.
Sorry, I can't find a relationship between Sundials and PyO3/Maturin. Am I missing something?
What I mean is that (at least in my experience) people are not so commonly writing serious numeric applications in Rust as Python extensions because the numeric libraries on which you'd typically write in a compiled language are not as well developed and are in themselves often thin wrappers over C/C++ code at the moment. When you write an extension library you typically want all the 'slow' stuff to be done in a layer below the interpreted language for performance reasons.
So if you wanted to write a Python physics library that included, say, time integration with an implicit solver like those SUNDIALS provides (and SUNDIALS is like the gold standard in this area), you have less well-used options for the time-integration part if you write the extension in Rust than if you write it in C/C++. Or you're using the same library anyway.
It looks like Narwhals; "Narwhals and scikit-Lego came together to achieve dataframe-agnosticism" https://news.ycombinator.com/item?id=40950813 :
> Narwhals: https://narwhals-dev.github.io/narwhals/ :
>> Extremely lightweight compatibility layer between [pandas, Polars, cuDF, Modin]
> Lancedb/lance works with [Pandas, DuckDB, Polars, Pyarrow,]; https://github.com/lancedb/lance
SymPy has solvers for ODEs and PDEs, and other libraries do convex optimization. SymPy also has lambdify to compile a relatively slow symbolic expression tree into faster 'vectorized' functions.
From https://news.ycombinator.com/item?id=40683777 re: warp :
> sympy.utilities.lambdify.lambdify() https://github.com/sympy/sympy/blob/master/sympy/utilities/l... :
>>> """Convert a SymPy expression into a function that allows for fast numeric evaluation""" [with e.g. the CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, PyTorch (*), SymPy, numexpr, but not yet cmath]
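lambdify's speedup comes from printing the symbolic tree to source text once and compiling that into a plain function. A stdlib-only toy of the same code-generation idea (not SymPy's actual printer; names are illustrative, and the expression string is assumed trusted):

```python
import math

def toy_lambdify(argname, expr_src):
    """Compile an expression string into a plain fast function, the way
    lambdify prints a SymPy tree to source and compiles it once.
    expr_src is trusted source text, e.g. "math.sin(x) + x**2"."""
    code = compile(f"def _f({argname}): return {expr_src}", "<toy>", "exec")
    ns = {"math": math}
    exec(code, ns)
    return ns["_f"]

f = toy_lambdify("x", "math.sin(x) + x**2")
assert f(0.0) == 0.0
```

The real lambdify does the same but swaps the printer per backend (math, mpmath, NumPy, JAX, ...), which is why the compiled function runs at ordinary-Python or NumPy speed instead of walking the expression tree per call.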
I’m perfectly familiar with SymPy and it’s great, but it doesn’t have methods comparable in performance on stiff PDEs to CVODE, and it’s not parallelised either. CVODES offers sensitivity analysis, ARKODE offers multi-rate integrators for systems where the ODE can be decomposed into slow and fast rates, and so on; it’s a much more sophisticated and specialist library.
CVODE: https://github.com/ufz/cvode
scikits.odes supports CVODE: scikits.odes.sundials.cvode: https://bmcage.github.io/odes/master/api/compat.html#module-....
scikits.odes docs > Choosing a Solver: https://scikits-odes.readthedocs.io/en/latest/solvers.html
scipy.integrate.solve_ivp has Radau, BDF, and LSODA for stiff ODEs, in Python: https://docs.scipy.org/doc/scipy/reference/generated/scipy.i...
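A minimal call sketch of the above: y' = -1000y is a classic stiff test problem where explicit RK45 needs tiny steps but the implicit BDF method stays stable (tolerances left at scipy defaults):

```python
from scipy.integrate import solve_ivp

# Stiff linear decay: exact solution y(t) = exp(-1000 t) is ~0 by t = 1.
sol = solve_ivp(lambda t, y: -1000 * y, (0.0, 1.0), [1.0], method="BDF")
assert sol.success
assert abs(sol.y[0, -1]) < 1e-3  # decayed to ~zero within default tolerances
```

Swapping method="BDF" for "Radau" or "LSODA" exercises the other stiff solvers mentioned; "RK45" on the same problem takes orders of magnitude more steps.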
If you add Arrow RecordBatch or Table output to CVODE with arrow-cpp, e.g. Dask can zero-copy buffers to Python (pyarrow, pandas.DataFrame(dtype_backend=arrow), or narwhals) when it needs to gather / fan in at a computational barrier in a process-parallel workflow.
Is sklearn-deap useful with scikits.odes and sundials (and dask or not)?
Thanks, but experimental support based off a Github comment is not what I'm looking for when I distribute software.
Agricultural practices can harm soil resilience through changing feedback loops
NewsArticle: "The World’s Food Supply Is at Risk: Modern Agriculture Is Destroying the Soil Beneath Our Feet" (2025) https://www.nature.com/articles/s44264-025-00098-6 :
> According to the study, published in NPJ Sustainable Agriculture, the most severe danger to soil resilience is erosion driven by over-plowing, overgrazing, and deforestation. This process can strip away fertile layers that take centuries to develop. Other significant threats include the build-up of salts in irrigated soils (salinization), pollution from pesticides and plastic residues, and soil compaction caused by intensive livestock operations.
ScholarlyArticle: "Agricultural practices can threaten soil resilience through changing feedback loops" (2025) https://www.nature.com/articles/s44264-025-00098-6 :
> Abstract: Soil has supported terrestrial food production for millennia; however, agricultural intensification may affect its resilience. Using a systems-thinking approach, we reviewed the impacts of conventional-agriculture practices on soil resilience and identified alternative practices that could mitigate these effects. We found that many practices only affect soil resilience with their long-term repeated use. Lastly, we ranked the impacts that pose the greatest threats to soil resilience and, consequently, food and feed security.
> [...] Resilience theory describes a spectrum of system responses to drivers or perturbations from gradual, near-linear and reversible, to abrupt, non-linear and strongly hysteretic
From https://news.ycombinator.com/item?id=43744417 :
> Though it may be more efficient to grow without soil; soil depletion isn't prevented by production processes that do not generate topsoil
> Where do soil amendments come from, and what would deplete those stocks?
> [JNF, KNF, JADAM], No-Till, Mini-forest, avoid compaction and allow diverse roots to break up the soil and hold moisture
"Inoculating soil with mycorrhizal fungi can increase plant yield: study" https://news.ycombinator.com/item?id=38527264 :
> [...]
> Soil fertility > Soil depletion: https://en.wikipedia.org/wiki/Soil_fertility#Soil_depletion
Show HN: Gitcasso – Syntax Highlighting and Draft Recovery for GitHub Comments
I built a browser extension called Gitcasso which:
- Adds markdown syntax highlighting to GitHub textareas
- Lists every open PR/issue tab and any drafts
- (Optional, unimplemented) autosaves your comment drafts so you don’t lose work
I made it because I was impressed by https://overtype.dev/ (a markdown textarea syntax highlighter) which went big here on HN a few weeks ago, and it seemed like a perfect fit for a GitHub browser extension. Keeping up with changes on upstream GitHub would normally be a pain, but with Playwright and Claude Code it seemed possible for it to be nearly automatic, which has turned out to be mostly true!
This was the first time where I built a tool, gave the tool to AI, and then AI used the tool to make the thing I hoped it would be able to make. I'm pretty sold on the general technique...
GitHub repo (Apache2-licensed, open source): https://github.com/diffplug/gitcasso
Video walkthrough (2 mins of the tool, 12 mins of its development tooling): https://www.youtube.com/watch?v=wm7fVg4DWqk
And a text writeup with timestamps to the video walkthrough https://nedshed.dev/p/meet-gitcasso
refined-github > Highlights > Adding comments, Conversations: https://github.com/refined-github/refined-github#writing-com...
yeah, refined-github is definitely the legend here, GitHub has incorporated so many of their ideas. But as of 2021 they were pretty dead-set against syntax highlighting: https://github.com/refined-github/refined-github/issues/5075
> We are not going to mess around with the comment box with syntax highlighting, which numerous people tried and failed due to GitHub updates or edge cases that are not so edgy.
Show HN: A Digital Twin of my coffee roaster that runs in the browser
I built this website to host a data-driven model of my coffee sample roaster.
I realized after 20 or so batches on the machine that while the controls are intuitive (heat, fan, and drum speeds), the physics can be unintuitive. I wanted to use my historical roast data to create and tune a model that I could use to do roast planning, control, and to help me build my own intuition for roasting. This website lets you interact with my roaster in a virtual, risk-free setting!
The models are custom Machine Learning modules that honor roaster physics and bean physics (this is not GPT/transformer-based). Buncha math.
The models are trained on about a dozen real roasts. The default bean model is an Ethiopian Guji bean.
My next steps are to add other roasters and the ability to practice control/reference tracking.
Coffee grounds are compostable. Re: collectd-python-plugins, LoRA, MontyHome BLE + a Pi: https://news.ycombinator.com/item?id=42200099#42201207
A Tuboencabulating Roaster
Notes on switching to Helix from Vim
> crashes: every week or so there’s a segfault and the editor crashes. ... This doesn’t bother me that much though, I can just reopen it.
Strange approach to data loss: since it doesn't have persistent undo, you can't just reopen it to the same editing state?
> After using Vim/Neovim for 20 years, I’ve tried both “build my own custom configuration from scratch” and “use someone else’s pre-built configuration system” and even though I love Vim I was excited about having things just work without having to work on my configuration at all.
I don't really get it, given how primitive the resulting Helix config is (I mean, even the most frequent commands are based on the mistaken, unergonomic w/b defaults); presumably you would've been able to replicate it completely in the first X years of using Vim, and then there is no hell anymore?
> little help popup telling me places I can go. I really appreciate this because I don’t often use the “go to definition” or “go to reference” feature and I often forget the keyboard shortcut.
Exactly! Pity this basic contextual help isn't more widespread, every single app that uses a lot of keybind sequences could benefit from it, especially if it becomes a bit smarter and only shows a popup if you don't finish the sequence right away
>> little help popup telling me places I can go. I really appreciate this because I don’t often use the “go to definition” or “go to reference” feature and I often forget the keyboard shortcut.
> Exactly! Pity this basic contextual help isn't more widespread, every single app that uses a lot of keybind sequences could benefit from it, especially if it becomes a bit smarter and only shows a popup if you don't finish the sequence right away
I agree 100%. This would be helpful in so many places. That was my favorite part of the article -- one little paragraph and screenshot, but it made me desperately crave that feature almost everywhere. I agree that it'd need to be smart about it -- after a timeout, as you mentioned, is a great idea. That way it can stay out of your way if you know what you're doing, and only pop up when you hesitate.
Neovim with lazy.nvim has that by default (delay included).
I'm not Neovim person, but would you happen to know what plugin provides that feature?
Sorry, I'm exhausted: lazy.nvim is the package manager, but I meant LazyVim, which is the distribution. Within this distribution I'm pretty sure it's which-key that provides the popup. If I type <leader> it pops up with suggestions (and a little icon in front of each indicating whether the next key has sub-commands or not).
folke/which-key.nvim: https://github.com/folke/which-key.nvim :
> Customizable Layouts: choose from classic, modern, and helix presets or customize the window.
LazyVim keymaps: https://www.lazyvim.org/keymaps
My approach to building large technical projects (2023)
I have huge respect for Mitchell, it's impressive what he achieved.
I agree with all the points of this article and would like to add one: Have a quick feedback loop. For me, it's really motivating to be able to make a change and quickly see the results. Many problems just vanish or become tangible to solve when you playfully modify your source code and observe the effect.
Would you say that testcases help here? I've been thinking about applying e2e tests on any bugs I find so I know they're fixed
E2E tests in a high ratio to other tests will cause problems. They’re slow and brittle and become a job all on their own. It’s possible that they might help at the start of debugging, but try to isolate the bugs to smaller units of code (or interactions between small pieces of code).
Hermetic e2e tests (i.e. ones that can run offline and fake apis/databases) dont have that problem so much.
They also have the advantage that you can A) refactor pretty much everything underneath them without breaking the test, B) test realistically (an underrated quality) and C) write tests which more closely match requirements rather than implementation.
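The "run offline against fake APIs" idea can be shown end to end with nothing but the stdlib: the test exercises the real client code path, but against a throwaway in-process fake of the upstream service (endpoint path and payload here are invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeAPI(BaseHTTPRequestHandler):
    """In-process stand-in for the live upstream service."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeAPI)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "app under test" hits the fake exactly as it would the real API.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status = json.load(resp)["status"]
server.shutdown()
assert status == "ok"
```

Because the fake is reached over real HTTP, everything beneath the client call (serialization, routing, error handling) can be refactored freely without touching the test.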
> i.e. ones that can run offline and fake apis/databases
I can see a place for this, but these are no longer e2e tests. I guess that’s what “hermetic” means? If so it’s almost sinister to still call these e2e tests. They’re just frontend tests.
> A) refactor pretty much everything underneath them without breaking the test
This should always be true of any type of tests unless it’s behavior you want to keep from breaking.
> B) test realistically (an underrated quality)
Removing major integration points from a test is anything but realistic. You can do this, but don’t pretend you’re getting the same quality as a colloquial e2e tests.
> C) write tests which more closely match requirements rather than implementation
If you’re ever testing implementation you’re doing it wrong. Tests should let you know when a requirement of your app breaks. This is why unit tests are often kinda harmful. They test contracts that might not exist.
> try to isolate the bugs to smaller units of code (or interactions between small pieces of code).
This is why unit tests before e2e tests.
It's higher risk to build on components without unit test coverage, even if the paltry smoke/e2e tests say it's fine per the customer's input examples.
Is it better to fuzz low-level components or high-level user-facing interfaces first?
IIUC in relation to Formal Methods, tests and test coverage are not sufficient but are advisable.
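One way to frame the low-level-first question: the same harness can target either layer, but a low-level component accepts far more iterations per second. A stdlib-only sketch of invariant-checking fuzz on a small component (harness and invariant names are made up):

```python
import random

def fuzz(fn, invariant, trials=1000, max_len=64):
    """Throw random byte strings at a component and check an invariant.
    Pointed at a user-facing entry point instead, the same loop becomes
    a (much slower) high-level fuzz."""
    rng = random.Random(0)  # seeded for reproducible failures
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        invariant(data, fn(data))

def roundtrip_invariant(data, out):
    # Property: hex-encoding must be lossless for every input.
    assert bytes.fromhex(out) == data

fuzz(bytes.hex, roundtrip_invariant)
```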
Competency Story: The customer and product owner can write BDD tests in order to validate the app against the requirements
Prompt: Write playwright tests for #token_reference, that run a named factored-out login sequence, and then test as human user would that: when you click on Home that it navigates to / (given browser MCP and recently the Gemini 2.5 Computer Operator model)
Memory access is O(N^[1/3])
> L3 cache is not built for mass throughput in the same way that DRAM is, and so it has roughly identical mass throughput despite its much closer distance to the computation.
"The von Neumann bottleneck is impeding AI computing?" (2025) https://news.ycombinator.com/item?id=45398473 :
> How does Cerebras WSE-3 with 44GB of 'L2' on-chip SRAM compare to Google's TPUs, Tesla's TPUs, NorthPole, Groq LPU, Tenstorrent's, and AMD's NPU designs?
From https://news.ycombinator.com/item?id=42875728 :
> WSE-3: 21 PB/S
From https://hackernoon.com/nvidias-mega-machine-crushes-all-of-2... :
> At Computex 2025, Nvidia’s Jensen Huang dropped a bombshell: the NVLink Spine, a compute beast pumping 130 terabytes per second, eclipsing the internet’s 2024 peak of 112.5 TB/s.
"A Comparison of the Cerebras Wafer-Scale Integration Technology with Nvidia GPU-based Systems for Artificial Intelligence" (2025-03) https://arxiv.org/abs/2503.11698v1
WinBoat: Windows apps on Linux with seamless integration
> [Flatpak, Podman?]: This is on our to-do list, but it'll take some effort because Flatpak is pretty isolated from the rest of the system and apps, so we'd have to find a way to expose installed apps, the Docker binary, and the Docker socket, and many other utilities
Vinegar wraps WINE in a Flatpak.
The vscode flatpak works with podman-remote packaged at a flatpak too; or you can call `host-spawn` or `flatpak-spawn` like there's no container/flatpak boundary there.
Nested rootless containers do work somehow; presumably with nested /etc/subuid mappings for each container?
Distrobox passes a number of flags necessary to run GUI apps in rootless containers with Podman. Unfortunately the $XAUTHORITY path varies with each login on modern systemd distros.
Improving Clinical Trial Design
Clinical trial: https://en.wikipedia.org/wiki/Clinical_trial
Randomized controlled trial: https://en.wikipedia.org/wiki/Randomized_controlled_trial
Glossary of experimental design: https://en.wikipedia.org/wiki/Glossary_of_experimental_desig...
> Multi-arm multi-stage (MAMS) platform trials by the MRC Clinical Trials Unit at UCL — an overview of the advantages of MAMS trials, with teaching material
Is this like multi-armed bandit which marketing runs instead of A/B sometimes?
Multi-armed bandit: https://en.wikipedia.org/wiki/Multi-armed_bandit
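Unlike a fixed-split A/B test, a bandit shifts traffic toward the better-performing arm as evidence accumulates. A minimal epsilon-greedy sketch (arm conversion rates and function name are invented for illustration):

```python
import random

def epsilon_greedy(rate_by_arm, pulls=5000, epsilon=0.1, seed=0):
    """Explore a random arm with probability epsilon; otherwise exploit
    the arm with the best observed mean reward so far."""
    rng = random.Random(seed)
    n = len(rate_by_arm)
    counts, totals = [0] * n, [0.0] * n
    for _ in range(pulls):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(n)  # explore (and seed each arm once)
        else:
            arm = max(range(n), key=lambda a: totals[a] / counts[a])
        reward = 1.0 if rng.random() < rate_by_arm[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Most traffic should end up on the higher-converting arm.
counts = epsilon_greedy([0.1, 0.3])
assert counts[1] > counts[0]
```

Multi-arm multi-stage trials share the motivation (drop losing arms early) but add the interim-analysis rigor that a marketing bandit skips.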
> Cluster randomized trials by the NIH Pragmatic Trials Collaboratory
Cluster randomized trials: https://en.wikipedia.org/wiki/Cluster-randomised_controlled_...
> Crossover trials by the Cochrane Training Handbook
Crossover study, Crossover Trial: https://en.wikipedia.org/wiki/Crossover_study
But that's a different usage of the word "Crossover" than "Crossover" as an observed process and method of adaptation and optimization?
Crossover (evolutionary algorithm) https://en.wikipedia.org/wiki/Crossover_(evolutionary_algori...
From 2019 re: "post-market" clinical data which is collected after clinical trials and regulatory approval: https://news.ycombinator.com/item?id=21235358 :
> We really could get more out of this data through international collaboration and through linked data (e.g. URIs for columns). See: "Open, and Linked, FDA data" https://github.com/FDA/openfda/issues/5#issuecomment-5392966... and "ENH: Adverse Event Count / 'Use' Count Heatmap" https://github.com/FDA/openfda/issues/49
> With sales/usage counts, we'd have a denominator with which we could calculate relative hazard.
FHIR is a standard for sharing health data.
FHIR would be useful for sharing collected clinical data - for example vitals - with patients and their other providers if they choose.
FHIR could be useful for collecting chart data from clinical trial participants.
FHIR can be represented in JSON-LD, which is Linked Data in JSON such that no XML parsing is required.
Re: FHIR JSON-LD https://news.ycombinator.com/item?id=42230270 :
> /? awesome clinical open source: https://www.google.com/search?q=awesome+clinical+open+source
"Matching patients to clinical trials with large language models" (2025) https://news.ycombinator.com/item?id=42190128 "TrialGPT"
Show HN: Recall: Give Claude memory with Redis-backed persistent context
Hey HN! I'm José, and I built Recall to solve a problem that was driving me crazy.
The Problem: I use Claude for coding daily, but every conversation starts from scratch. I'd explain my architecture, coding standards, past decisions... then hit the context limit and lose everything. Next session? Start over.
The Solution: Recall is an MCP (Model Context Protocol) server that gives Claude persistent memory using Redis + semantic search. Think of it as long-term memory that survives context limits and session restarts.
How it works:
- Claude stores important context as "memories" during conversations
- Memories are embedded (OpenAI) and stored in Redis with metadata
- Semantic search retrieves relevant memories automatically
- Works across sessions, projects, even machines (if you use cloud Redis)
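The retrieval step (embed each memory, rank by similarity at query time) can be sketched with toy hand-made vectors; a real setup would call the OpenAI embeddings API and keep the vectors in Redis as described, and these names and 3-dimensional vectors are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy memory store: text -> pretend embedding (real embeddings would be
# 1536-dim text-embedding-3-small vectors stored in Redis with metadata).
memories = {
    "We use Tailwind for styling": [0.9, 0.1, 0.0],
    "API rate limit is 1000/min": [0.1, 0.9, 0.2],
}

def recall(query_vec, k=1):
    """Return the k memories most similar to the query embedding."""
    ranked = sorted(memories, key=lambda m: cosine(memories[m], query_vec),
                    reverse=True)
    return ranked[:k]

assert recall([0.0, 1.0, 0.1]) == ["API rate limit is 1000/min"]
```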
Key Features:
- Global memories: Share context across all projects
- Relationships: Link related memories into knowledge graphs
- Versioning: Track how memories evolve over time
- Templates: Reusable patterns for common workflows
- Workspace isolation: Project A memories don't pollute Project B
Tech Stack:
- TypeScript + MCP SDK
- Redis for storage
- OpenAI embeddings (text-embedding-3-small)
- ~189KB bundle, runs locally
Current Stats:
- 27 tools exposed to Claude
- 10 context types (directives, decisions, patterns, etc.)
- Sub-second semantic search on 10k+ memories
- Works with Claude Desktop, Claude Code, any MCP client
Example Use Case: I'm building an e-commerce platform. I told Claude once: "We use Tailwind, prefer composition API, API rate limit is 1000/min." Now every conversation, Claude remembers and applies these preferences automatically.
What's Next (v1.6.0 in progress):
- CI/CD pipeline with GitHub Actions
- Docker support for easy deployment
- Proper test suite with Vitest
- Better error messages and logging
Try it:
npm install -g @joseairosa/recall
# Add to claude_desktop_config.json
# Start using persistent memory
imo it would be better to carry the whole memory outside of the inference time where you could use an LLM as a judge to track the output of the chat and the prompts submitted
it would sort of work like grammarly itself and you can use it to metaprompt
i find all the memory tooling, even native ones on claude and chatgpt to be too intrusive
Totally get what you're saying! Having Claude manually call memory tools mid-conversation does feel intrusive, I agree with that, especially since you need to keep saying Yes to the tool access.
Your approach is actually really interesting, like a background process watching the conversation and deciding what's worth remembering. More passive, less in-your-face.
I thought about this too. The tradeoff I made:
Your approach (judge/watcher):
- Pro: Zero interruption to conversation flow
- Pro: Can use cheaper model for the judge
- Con: Claude doesn't know what's in memory when responding
- Con: Memory happens after the fact
Tool-based (current Recall):
- Pro: Claude actively uses memory while thinking
- Pro: Can retrieve relevant context mid-response
- Con: Yeah, it's intrusive sometimes
Honestly both have merit. You could even do both, background judge for auto-capture, tools when Claude needs to look something up.
The Grammarly analogy is spot on. Passive monitoring vs active participation.
Have you built something with the judge pattern? I'd be curious how well it works for deciding what's memorable vs noise.
Maybe Recall needs a "passive mode" option where it just watches and suggests memories instead of Claude actively storing them. That's a cool idea.
Is this the/a agent model routing problem? Which agent or subagent has context precedence?
jj autocommits when the working copy changes, and you can manually stage against @-: https://news.ycombinator.com/item?id=44644820
OpenCog differentiates between Experiential and Episodic memory; and various processes rewrite a hypergraph stored in RAM in AtomSpace. I don't remember how the STM/LTM limit is handled in OpenCog.
So the MRU/MFU knapsack problem and more predictable primacy/recency bias because context length limits and context compaction?
OpenCogPrime:EconomicAttentionAllocation: https://wiki.opencog.org/w/OpenCogPrime:EconomicAttentionAll... :
> Economic Attention Allocation (ECAN) was an OpenCog subsystem intended to control attentional focus during reasoning. The idea was to allocate attention as a scarce resource (thus, "economic") which would then be used to "fund" some specific train of thought. This system is no longer maintained; it is one of the OpenCog Fossils.
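The "attention as a scarce resource" framing maps onto a knapsack-style allocation: fund the highest value-per-cost items until the budget runs out. A toy greedy sketch (item names, importance scores, and token costs are all invented; real ECAN spread "funds" through the hypergraph rather than picking a set once):

```python
def allocate_attention(items, budget):
    """Greedy knapsack over (name, importance, token_cost) triples:
    fund the best importance-per-token memories until the context
    budget is spent. A crude stand-in for economic attention allocation."""
    chosen = []
    for name, importance, cost in sorted(items, key=lambda it: it[1] / it[2],
                                         reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

items = [("style guide", 8, 200), ("old chit-chat", 1, 300), ("api limits", 9, 100)]
assert allocate_attention(items, 350) == ["api limits", "style guide"]
```

Context compaction under a length limit is the same shape of problem: decide which memories to "fund" into the window and which to evict.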
(Smart contracts require funds to execute (redundantly and with consensus), and there are scarce resources).
Now there's ProxyNode and there are StorageNode implementations, but Agent is not yet reimplemented in OpenCog?
ProxyNode implementers: ReadThruProxy, WriteThruProxy, SequentialReadProxy, ReadWriteProxy, CachingProxy
StorageNode > Implementations: https://wiki.opencog.org/w/StorageNode#Implementations
Ask HN: Should tariff war revenue be spent: Farmers, Kids, Fund the Shutdown
Tariff revenue is near $190 billion for 2025 so far. That money could keep the government running, pay for at least a $10b subsidy for American farmers, or provide adequate social services for women and children.
They raised the debt limit by $4,000,000,000,000 ($4T) in May of 2025 and are already out of money, and so we're at "shutdown showdown" again.
How should the tariff revenue be spent to help the United States?
-$4T/4mo = -$1T/mo
-4,000,000,000,000
+  190,000,000,000
------------------
-3,810,000,000,000
CNN recommended: "White House says it will use tariff revenue to fund federal food aid for mothers and young children" https://www.cnn.com/2025/10/07/politics/tariff-revenue-feder...
"Trump considers massive bailout of at least $10 billion for American farmers hurt by his trade war" https://www.cnn.com/2025/10/05/business/farmer-bailout-trump...
"Trump’s tariff revenue could help keep the government open. Why isn’t that happening?" https://www.cnn.com/2025/10/06/economy/tariff-revenue-govern...
[deleted]
Show HN: ut – Rust based CLI utilities for devs and IT
Hey HN,
I find myself reaching for tools like it-tools.tech or other random sites every now and then during development or debugging. So, I built a toolkit with a sane and simple CLI interface for most of those tools.
For the curious and lazy, at the moment, ut has tools for,
- Encoding: base64 (encode, decode), url (encode, decode)
- Hashing: md5, sha1, sha224, sha256, sha384, sha512
- Data Generation: uuid (v1, v3, v4, v5), token, lorem, random
- Text Processing: case (lower, upper, camel, title, constant, header, sentence, snake), pretty-print, diff
- Development Tools: calc, json (builder), regex, datetime
- Web & Network: http (status), serve, qr
- Color & Design: color (convert)
- Reference: unicode
For full disclosure, parts of the toolkit were built with Claude Code (I wanted to use this as an opportunity to play with it more). Feel free to open feature requests and/or contribute.
is this stuff not pretty easy to do with python?
$ python -c "import base64; print(base64.b64encode('$INPUT_STRING'.encode('utf-8')).decode('utf-8'))"
You don't even have to go that far, `base64` is a coreutil (https://github.com/coreutils/coreutils/blob/ebfd80083b4fe4ae...).
The point of ut is not to replace or invent new tooling. It is meant to be a set of tools that are simple, self-explanatory, and work out of the box with sane defaults; essentially, something you don't have to remember syntax for or go through help/man pages every time you want to use it.
uutils/coreutils has a `base64` in Rust which just gained better performance due to the base64-simd crate for SIMD: https://github.com/uutils/coreutils/pull/8578
Note that uutils does not work if the file does not fit into memory.
With GNU coreutils:
$ base64 /dev/zero | head -c 1 | wc -c
1
With uutils, doing the same would exhaust your system's memory until either it freezes or oomd kills the process.
For now. There's no reason this won't/can't be worked on in the future.
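The streaming behavior GNU base64 exhibits can be sketched in a few lines: encode fixed-size chunks whose length is a multiple of 3 so the chunks produce no internal padding and their concatenation equals a whole-input encoding, with only one chunk in memory at a time (a toy illustration of the technique, not uutils' actual plan):

```python
import base64
import io

def b64encode_stream(reader, writer, chunk_size=3 * 1024):
    # chunk_size must be a multiple of 3: each full chunk then encodes
    # with no '=' padding, so concatenated chunk encodings are identical
    # to encoding the entire input at once.
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            return
        writer.write(base64.b64encode(chunk))

src, dst = io.BytesIO(b"\x00" * 10_000), io.BytesIO()
b64encode_stream(src, dst)
assert dst.getvalue() == base64.b64encode(b"\x00" * 10_000)
```

Pointed at an unbounded source like /dev/zero, this loop runs in constant memory, which is exactly the property the GNU example above demonstrates.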
From engines to nanochips: Physicists redefine how heat moves
> [At nanoscale] heat doesn't just "diffuse." It can ripple like sound waves, remember its past, or flow in elegant streams like a fluid in a pipe. For decades, scientists had pieces of this puzzle but no unifying explanation.
> Now, researchers at Auburn University and the U.S. Department of Energy's National Renewable Energy Laboratory have delivered what they call a "unified statistical theory of heat conduction."
> "Fourier's law was written 200 years ago; this breakthrough rewrites the rules for how heat conducts in the nanoscale and ultrafast world of today," said Prof. Jianjun (JJ) Dong, Thomas and Jean Walter Professor of Physics at Auburn University.
ScholarlyArticle: "Time-domain theory of transient heat conduction in the local limit" (2025) https://journals.aps.org/prb/abstract/10.1103/p8wg-p1j3
Toybox: All-in-one Linux command line
toybox/library; 0BSD, C: https://github.com/landley/toybox/tree/master/lib
src/uucore/src/lib/features, findutils, diffutils; MIT, Rust: https://github.com/uutils/coreutils/tree/main/src/uucore/src...
Origami Patterns Solve a Major Physics Riddle
I'm no big city physicist but whenever I see something like this I just wonder if it's not simply another expression of a fundamental group structure. There's also a whole branch of mathematics to express group structure as matrices (ie representation theory).
I'm sure there are physicists out there going "duh" and maybe the point here is simply a visual representation of that group structure, which is fine.
Here's the sad part for me. I'm really beginning to wonder if describing the fundamental group structure of physics is the best we can do. What I mean by that is we may never know what something really is. We'll just be able to describe the group structure. There's a group structure to describe electromagnetism and the nuclear forces, for example.
Take something like particle generations. What is a "generation"? Why are there precisely 3 of them? As best as I can tell, nobody knows and maybe nobody will ever know. We'll simply be able to describe their structure.
And that makes me sad in a way.
We'll never figure it out until somebody figures it out, and then we'll move onto the next thing we'll probably never figure out. Or, to put it another way, everything is an obscure mystery, until it isn't.
The "fundamental group structure" is known as "equivariant cohomology". (21st C. version of rep theory?) In other words, knowledge-building isn't always discontinuous as is sometimes assumed.
https://mathoverflow.net/questions/263411/equivariant-cohomo...
(No accepted answer)
Btw, are you familiar with CrasyDiracSchwinger?
Recipe for conductive plastics paves way for human bodies to go online
> Currently, the market price for just 100 grams of this type of conductive plastic would be around USD 100,000—about ten times as much as actual gold. But for the human body, it is in fact the absence of metals that makes this material so valuable.
Is there any reason that conductive Graphene and Carbon allotropes like Carbon Nanotubes (CNT) can't solve for in-vivo applications?
Why plastic?
There are also plastic waveguides now; "Shattering the 'copper or optics' paradigm: humble plastic waveguides outperform" (2024) https://www.techradar.com/pro/shattering-the-copper-or-optic...
Scientists want to treat complex bone fractures with a bone-healing gun
It looks like adding Magnesium to the PCL improves outcomes: https://news.ycombinator.com/item?id=45401737
One of these four red flags is seen before 99.6% of heart attacks
> Researchers from Northwestern Medicine and Yonsei University pooled the health data of 9,341,100 South Korean adults, as well as 6,803 US adults, looking at four key risk factors: high blood pressure, cholesterol, blood-sugar levels and smoking. They found that – in both cohorts – more than 99% of people who suffered coronary heart disease (CHD) had problematic levels of at least one of the four risk factors.
> Those specific risk factors, the data revealed, were:
> - Blood pressure ≥120/80 mm Hg or on treatment
> - Total cholesterol ≥200 mg/dL or on treatment
> - Fasting glucose ≥100 mg/dL, diagnosis of diabetes or on treatment
> - Past or current tobacco use
ScholarlyArticle: "Very High Prevalence of Nonoptimally Controlled Traditional Risk Factors at the Onset of Cardiovascular Disease" (2025) https://www.jacc.org/doi/10.1016/j.jacc.2025.07.014
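The four thresholds above can be sketched as a simple screening helper (an illustrative toy, not medical advice; the parameter names here are made up):

```python
# Hypothetical screening helper based on the four thresholds quoted above.
# Illustration only, not medical advice.

def risk_flags(systolic, diastolic, total_chol, fasting_glucose, tobacco_use,
               on_bp_treatment=False, on_chol_treatment=False, diabetic=False):
    """Return the subset of the four risk factors that are present."""
    flags = set()
    if systolic >= 120 or diastolic >= 80 or on_bp_treatment:
        flags.add("blood_pressure")
    if total_chol >= 200 or on_chol_treatment:
        flags.add("cholesterol")
    if fasting_glucose >= 100 or diabetic:
        flags.add("glucose")
    if tobacco_use:
        flags.add("tobacco")
    return flags

# Example: one elevated reading is enough to flag a factor.
print(risk_flags(systolic=118, diastolic=82, total_chol=185,
                 fasting_glucose=104, tobacco_use=False))
```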
[deleted]
Scientists found the "dark matter" of electronics
> "In the general field of electronics, one manipulates electron charge to process information," explains Xing Zhu, co-first author and PhD student in the unit. "In the field of spintronics, we exploit the spin of electrons to carry information. Going further, in valleytronics, the crystal structure of unique materials enables us to encode information into distinct momentum states of the electrons, known as valleys." The ability to use the valley dimension of dark excitons to carry information positions them as promising candidates for quantum technologies. Dark excitons are by nature more resistant to environmental factors like thermal background than the current generation of qubits, potentially requiring less extreme cooling and making them less prone to decoherence, where the unique quantum state breaks down
ScholarlyArticle: "A holistic view of the dynamics of long-lived valley polarized dark excitonic states in monolayer WS2" (2025) https://www.nature.com/articles/s41467-025-61677-2
Laser Sintering 3D-Prints Silver Electronics in Space
ScholarlyArticle: "Laser sintering of electrohydrodynamic inkjet-printed silver in microgravity for in-space manufacturing of electronic devices" (2025) https://www.nature.com/articles/s44334-025-00054-9
Show HN: Run – a CLI universal code runner I built while learning Rust
Hi HN — I’m learning Rust and decided to build a universal CLI for running code in many languages. The tool, Run, aims to be a single, minimal dependency utility for: running one-off snippets (from CLI flags), running files, reading and executing piped stdin, and providing language-specific REPLs that you can switch between interactively.
I designed it to support both interpreted languages (Python, JS, Ruby, etc.) and compiled languages (Rust, Go, C/C++). It detects languages from flags or file extensions, can compile temporary files for compiled languages, and exposes a unified REPL experience with commands like :help, :lang, and :quit.
Install: cargo install run-kit (or use the platform downloads on GitHub). Source & releases: https://github.com/Esubaalew/run
I used Rust while following the official learning resources and used AI to speed up development, so I expect there are bugs and rough edges. I’d love feedback on: usability and UX of the REPL, edge cases for piping input to language runtimes, security considerations (sandboxing/resource limits), packaging and cross-platform distribution.
Thanks — I’ll try to answer questions and share design notes.
> exposes a unified REPL experience with commands like :help, :lang, and :quit.
Those sound similar to "magic commands" in IPython and Jupyter?
There is not yet a Jupyter-xeus Rust kernel which would make it really easy to support Rust in JupyterLite in WASM on an .edu Chromebook and in JupyterLab: https://news.ycombinator.com/item?id=43354177
> jupyter_console is the IPython REPL for non-ipykernel jupyter kernels. [like evcxr]
> This magic command logs IPython REPL input and output to a file:
%logstart -o example.log.py
https://news.ycombinator.com/item?id=25923123 . Here's how to support something like _repr_html_() and IPython.display.display() with evcxr_jupyter: https://github.com/evcxr/evcxr/blob/main/evcxr_jupyter/READM...
I'm not sure what the pros and cons of evcxr_repl, jupyter_console + evcxr_jupyter, and Run are?
ProofOfThought: LLM-based reasoning using Z3 theorem proving
https://arxiv.org/abs/2409.17270
ScholarlyArticle: "Proof of thought: Neurosymbolic program synthesis allows robust and interpretable reasoning" (2024) https://arxiv.org/abs/2409.17270 .. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22...
Scientists are discovering a powerful new way to prevent cancer
I will again mention my favorite cancer champions, the bats.
Bats very rarely get cancer (I tried to find the actual # of verified cases of cancers in bats, but came up short), and they have a lot of anti-cancer adaptations in their genome.
They are also really good at taming inflammation and the activity of various viruses. That helps them survive infection with rabies: their systems just don't react as aggressively to the infection as ours (and most mammals') do.
This may help them against cancer as well. Not just p53 et al.
Though eating them will not solve cancer.
And, I've been chatting with 2.5pro about this, so:
/? which animals don't get cancer? https://www.google.com/search?q=Bats%2C+mole+rats%2C+horses%...
Bats (extra copies of the p53 gene, immune system, survival adaptations to atmospheric radiation exposure, high telomerase activity), Elephants (extra copies of p53), Mole rats (high molecular mass hyaluronan (HMM-HA) regulating sugar, contact inhibition), Blind mole rats (HMM-HA, a protein that causes (apoptotic?) cell death), Horses, Cows (BLV resistance, general resistance), Bowhead whales (prevention by DNA repair, CIRBP and RPA2, live to 200), Squirrels (hypersensitive cell monitoring, high telomerase activity), and Tasmanian devils (DFTD resistance adaptation) are all cancer resistant?
If there are natural food sources that treat or inhibit cancer, and humans unwittingly were eating such foods until modern times, could it be that humans have prevented adaptation by supplementation (a support that has collapsed as modern diets have changed)?
> [... list of anti-inflammatory diet foods]
How dietarily exposed to CPMV (Cowpea Mosaic Virus) are humans in modern and in ancient times? Does CPMV cause an IFN response?
(CPMV is highly prevalent in cowpeas and black-eyed peas (which are "good luck"))
> Antibody evidence: Studies have tested patient sera for antibodies against CPMV and found that over 50% of tested samples were positive, indicating past exposure. [...] The consistent, low-level dietary exposure to CPMV over human history, and its ability to trigger an IFN response without causing infection, could have provided a form of regular, passive immune stimulation. [...]
> Despite being a plant virus, CPMV is recognized by the mammalian immune system as a "danger signal." This recognition happens through special receptors on immune cells called Toll-like receptors (TLRs), specifically TLR2, TLR4, and TLR7.
> CPMV and IFN-gamma: Studies have shown that exposing human immune cells (peripheral blood mononuclear cells or PBMCs) to CPMV induces the secretion of IFN-gamma, a potent anti-tumor cytokine.
> Encapsulated RNA: The CPMV virus nanoparticle contains encapsulated RNA, which is one of the triggers for the immune response. The RNA activates TLR7/8, which leads to the production of Type I interferons (IFN-α and IFN-β), further boosting the immune system's anti-cancer response.
There are (differently encapsulated) RNA cancer vaccines in development.
CPMV is basically already a general purpose bioengineering platform with significant review IIUC?
How dietarily exposed to EPS3.9 polysaccharide are humans and cancer-resistant animals? Is there a one-two CPMV + EPS3.9 cancer treatment opportunity?
From https://news.ycombinator.com/item?id=44988761 :
> Can EPS3.9 cause pyroptosis cause IFN cause epitope spreading for cancer treatment?
/? Spongiibacter nanhainus CSC3.9 : https://www.google.com/search?q=Spongiibacter+nanhainus+CSC3... :
> Spongiibacter nanhainus CSC3.9 is a novel deep-sea bacterium isolated from a [deep ocean] cold seep [with blue light] that produces both a volatile organic compound (VOC) called VOC-3.9 with broad-spectrum antimicrobial activity and a sugar-based compound, exopolysaccharide (EPS3.9), which targets cancer cells by inducing programmed cell death
Could CRISPR or similar produce an alternate bacterium that's easier to brew which also produces EPS3.9 without the cold temperature and high pressure? Are there potentially other natural sources of EPS3.9 besides CSC3.9?
Endocannabinoids and the ECS Endocannabinoid System modulate and regulate immune and inflammatory responses (in non- and pre- insect invertebrates and in all vertebrates). Omega PUFAs are endocannabinoid precursors. There are also (fat-soluble) Omega polyunsaturated fatty acids in algae and in fish.
A bit of research on cancer again today: https://share.google/aimode/Qpar9RPUNy65IDt8n
Would there be advantages to CPMV + EPS3.9 + CPMVprime + mRNA for cancer therapy? https://g.co/gemini/share/9c6526d1991f
Removing these 50 objects from orbit would cut danger from space junk in half
What would it cost to deorbit those 50 rogue and derelict objects safely and with intentional consensus, maybe as a secondary mission deployed after a primary payload's orbital insertion?
When will it be safe and cost-efficient to - instead of deorbiting toward Earth's atmosphere - Capture and Haul and Rendezvous and gently Land orbital scrap on non-earth locations like the Moon or Mars or a thrust-retrofitted asteroid for later processing?
Would ISS be more useful as an oxygen tank in earth-moon orbit than in Earth's atmosphere and ocean?
It's not going to be cost-efficient to move to the moon unless and until there is commercial demand for scrap material on the moon and equipment to process it. A lot of delta-v is needed to transport stuff to the moon. On the other hand, stuff in LEO naturally deorbits within a certain timeframe, which can be accelerated with a small nudge, a drag sail, or possibly even laser ablation, and it's really not very far to go if you decide to actively deorbit it.
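The delta-v gap can be roughed out with the vis-viva equation (a sketch assuming a 400 km circular LEO and single idealized burns; real missions differ):

```python
from math import sqrt

MU = 398600.4     # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.0  # km

def visviva(r, a):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return sqrt(MU * (2.0 / r - 1.0 / a))

r_leo = R_EARTH + 400.0
v_circ = visviva(r_leo, r_leo)

# Deorbit: one burn to drop perigee to ~60 km and let drag finish the job.
a_deorbit = (r_leo + R_EARTH + 60.0) / 2.0
dv_deorbit = v_circ - visviva(r_leo, a_deorbit)

# To the Moon: one burn to raise apogee to roughly lunar distance (384,400 km).
a_tli = (r_leo + 384400.0) / 2.0
dv_tli = visviva(r_leo, a_tli) - v_circ

print(f"deorbit ~{dv_deorbit * 1000:.0f} m/s vs trans-lunar ~{dv_tli:.2f} km/s")
```

Roughly 100 m/s versus roughly 3 km/s: around 30x more delta-v to send scrap moonward, before even counting lunar descent.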
You'll likely get recycling in orbit (where the spacecraft are) before the moon (which has abundant aluminium anyway), so the compromise would be shifting debris in LEO to storage orbits with longer decay times.
Tethers.inc thought they had a plan but their test tether cooked itself considerably faster than they expected and they sort of fell off the media radar after that.
Just get the space debris section on the case https://en.wikipedia.org/wiki/Planetes
Maybe once in-space manufacturing and refueling become reality, scrap recycling will make more sense beyond Earth
Do we really want to start junking up the moon?
Having a pile of junk ready when you want to start a permanent base sounds like it could be useful.
Please explain that to my wife in reference to my collections in the garage.
"It might be useful some day"
If it's iron or aluminium, someone probably will pay silly (Earth) money for it on the Moon during early colonisation, but maybe not right at the start when there's no bandwidth or facilities for recycling scrap. Right up until the bigger regolith smelters come online.
The box of pre-loved Beanie Babies, perhaps also quite valuable: who knows how much hydrocarbons will be worth in early lunar colonies. Carbon isn't especially abundant in regolith (compared to silicon, aluminium, iron, etc) and has to be baked out as gases. Though I still doubt you'd have takers if the shipping isn't included...
Hydrogen moreso!
Oxygen is usually plentiful in various minerals, but hydrogen tends to get blown into space if there isn't a reactive atmosphere to recapture it.
Yes indeed. Apparently some of the carbon will come along with hydrogen as methane when you bake it out of the rock. Separating straight to carbon and hydrogen is a hassle, though, as the carbon clogs the catalyst.
Perhaps crashing a carbonaceous asteroid into the moon or disassembling in orbit and landing the results may work?
Thank you for inspiring my next project. I shall do all in my power to relocate the contents of my garage to the lunar surface.
Does anybody have Tom Mueller's phone number?
The amount of tools needed to process that junk and make newly usable stuff would be huge and not worth it. Not even talking about the energy needed to take the junk there and land it safely. The article is talking about rocket bodies mostly: they don't have that much useful material.
Of course.
If Starship achieves full and “rapid” reusability then it seems like it would be a lot more feasible to collect and deorbit space junk.
Most of the list is rocket bodies which are quite large, and rendezvous is already challenging when everybody is collaborating, rendezvous with a tumbling uncontrolled giant piece of junk is even more difficult.
Astroscale is working on that in collaboration with various space agencies, they're currently planning a mission (ADRAS-J2) to connect to an uncontrolled rocket body and deorbit it circa 2027: https://arstechnica.com/space/2025/02/astroscale-aced-the-wo...
Theoretically, a cheap option is to modify Starlink with enlarged argon tanks to rendezvous and "shepherd" large debris into lower orbits. Add LiDAR (DragonEye) and "Push Me Pull You" argon thrusters and it can exert a gentle push even when the debris object is uncontrolled and tumbling.
I'm somewhat surprised SpaceX hasn't tackled this problem yet. Even including just one StarCleaner every 2-3 Starlink launches could make a huge difference.
SpaceX even has the perfect test satellite. RatSat was their first successful launch in 2008, and it's barely decayed despite estimates that it would only last five to ten years.
And to answer the cost question, Astroscale is charging $8-100 million [0] per LEO junk removal mission (small numbers for small failed comms sats, big numbers for a spent upper stage).
The objects in the article are all at the bigger end. Presumably Astroscale have started with a technically easier mission than some of the 50 in the article, but they will also eventually benefit from economies of scale. So you can estimate the cost to remove the 50 bodies in the single-digit billions.
[0] https://www.kratosspace.com/constellations/articles/astrosca...
Starship launch costs are hypothetical, but pundits are estimating one to two hundred dollars per kg, or about ten million per launch. This would shave a significant amount off the cost of launching something big enough to de-orbit a large target, like an upper stage. Still, even if you spitball a figure like 20 million for each removal that’s still a billion dollars in total.
Starship lowers launch costs. One can launch more Astroscales with Starship.
It’s not necessary. But it helps turn what is currently research curiosity into something someone can fund at scale.
That tumbling should be conveniently predictable in the absence of aerodynamics, but even the best prediction would leave you with a tough nut to crack. I guess trying to solve that problem could be very helpful as a reality check to rein in any space mining fantasies?
You can deorbit things by pushing them "up" from Earth which lowers their perigee on the other side of the orbit.
A ground based high energy laser could ablate material from Earth which would provide propellant mass and incrementally knock objects into deorbiting trajectories.
And what happens to the ablated material? One large stage that is easily tracked via radar is preferable to tens or hundreds of millimetre-size chunks that could potentially flake off while ablating the surface of a rocket stage or derelict satellite.
Ablation turns the material into individual molecules.
Yes, when done perfectly in a lab. Under less than ideal conditions, temperature gradients cause cracks and then flakes are released and expelled.
Pushing "up" on an orbiting body causes no change to the altitude at the other side of the orbit (that is, 180 degrees around the orbit). However, it does raise the orbital altitude 90 degrees ahead, and lowers it 270 degrees ahead.
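This follows from the fact that a radial burn leaves angular momentum unchanged; a short sketch checking it with standard conic-orbit formulas (numbers assume a 400 km circular orbit and a 50 m/s "up" push):

```python
from math import cos, radians, sqrt

MU = 398600.4   # km^3/s^2, Earth's gravitational parameter
r0 = 6778.0     # km: 400 km circular orbit (assumed)
v0 = sqrt(MU / r0)
dv = 0.05       # km/s radial ("up") impulse

# A radial burn leaves angular momentum h = r0*v0 unchanged, so the
# semi-latus rectum p = h**2/MU stays equal to r0, and the burn point
# sits at true anomaly 90 deg (moving outward, away from perigee).
p = (r0 * v0) ** 2 / MU
energy = (v0 ** 2 + dv ** 2) / 2.0 - MU / r0
a = -MU / (2.0 * energy)           # new semi-major axis
e = sqrt(1.0 - p / a)              # new eccentricity

def radius(theta_deg):
    """Conic-orbit radius at true anomaly theta (degrees)."""
    return p / (1.0 + e * cos(radians(theta_deg)))

burn = 90.0                         # true anomaly at the burn point
r_90_ahead = radius(burn + 90.0)    # raised (this is now apogee)
r_180_ahead = radius(burn + 180.0)  # unchanged: back at r0
r_270_ahead = radius(burn + 270.0)  # lowered (this is now perigee)
print(r_90_ahead - r0, r_180_ahead - r0, r_270_ahead - r0)
```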
[deleted]
Celebrities relaunch a McCarthy-era committee to defend free speech
> On Wednesday, over 550 celebrities relaunched a group first organized during the post-World War II Red Scare: the Committee for the First Amendment
From https://www.committeeforthefirstamendment.com/ :
> Today, we relaunch the Committee for the First Amendment.
Ask HN: Are there infrared wallpaper products in US markets?
Remembered hearing about cuttable (graphene) infrared wallpaper products awhile back;
It looks like the UK has NextGen Heating and iHelios. (I have no affiliation with either.)
NextGen Heating: https://www.nexgenheating.com/ :
> NexGen manufactures Infrared Heating Systems for homes and buildings. The systems are simple to install and user-friendly. The agile technology warms quickly and controls are tailored to each environment, giving the user choice of when and where to heat.
iHelios Heating: https://iheliosliving.co.uk/ :
> Smart Infrared IR Heating Film Renewable Energy for the Modern Home
Given product certification and training here too, are there yet any US distributors of infrared heating products like infrared 'wallpaper' (that works on ceilings, walls, floors) that can be cut to form, punctured, and torn?
Are there infrared wallpaper products in US markets? What are the risks and cost advantages? Is (graphene-based) infrared residential and commercial heating internationally undercapitalized?
/? infrared wallpaper heating : https://www.google.com/search?q=infrared+wallpaper+heating
Evaluating the impact of AI on the labor market: Current state of affairs
FWIU software jobs hiring was/is down along with the cancelling of the R&D tax credit.
From "House restores immediate R&D deduction in new tax bill" (2024) https://news.ycombinator.com/item?id=39213002 .. https://news.ycombinator.com/context?id=38988189 :
>> "Since amortization took effect [ in 2022 thanks to a time-triggered portion of the Trump-era Tax Cuts and Jobs Act ("TCJA" 2017) ], the growth rate of R&D spending has slowed dramatically from 6.6 percent on average over the previous five years to less than one-half of 1 percent over the last 12 months," Estes said. "The [R&D] sector is down by more than 14,000 jobs"
> Hopefully R&D spending at an average of 6.6% will again translate to real growth
From "Generative AI as Seniority-Biased Technological Change" https://news.ycombinator.com/item?id=45275202 :
> Did tech reduce hiring after Section 174 R&D tax policy changes?
[...]
> From https://news.ycombinator.com/item?id=45131866 :
>> In 2017 Trump made businesses have to amortize these [R&D] expenses over 5 years instead of deducting them, starting in 2022 (it is common for an administration to write laws that will only have a negative effect after they're gone). This move wrecked the R&D tax credit. Many US businesses stopped claiming R&D tax credits entirely as a result. Others had surprise tax bills
> People just want the same R&D tax incentives back:
> "Tell HN: Help restore the tax deduction for software dev in the US (Section 174)" (2025 (2439 points)) https://news.ycombinator.com/item?id=44226145
It is suspected that hiring levels correlate with the change to the R&D tax deduction.
The TCJA (2017, Trump) required R&D expenses to be amortized over five years starting in 2022, gutting the deduction.
The OBBBA (2025, Trump) restored immediate expensing for tax year 2025.
Jane Goodall has died
Jane Goodall was a United Nations Messenger of Peace.
Jane Goodall: https://en.wikipedia.org/wiki/Jane_Goodall
"Dr. Jane Goodall Teaches Conservation" https://www.masterclass.com/classes/jane-goodall-teaches-con...
This one is more about the "apes" (primates),
"Primatologist Answers Ape Questions From Twitter" https://youtube.com/watch?v=z4BmXSBXz-c
Hunter S Thompson's death to be reviewed more than 20 years later
Gonzo journalism: https://en.wikipedia.org/wiki/Gonzo_journalism
The Battle of Aspen > Thompson's campaign for sheriff: https://en.wikipedia.org/wiki/The_Battle_of_Aspen :
Freak Power in the Rockies
Dynomite!
FCC to consider ending merger ban among US broadcast networks
"FCC starts process that could loosen TV station ownership rules" (2025) https://youtube.com/watch?v=hvdXMx2cQfE :
> Currently no single company can own stations reaching more than 39% of TV households in America. Nexstar's proposed acquisition of Tegna would bring that combined company's reach to about 80%; double the current limit. The FCC is also expected to review a rule - tossed out by a federal appeals court in July - that a single company cannot own 2 of the top 4 TV stations in a market. In Denver, Nexstar already owns Fox 31 and Channel 2. The proposed merger would add 9 News and Channel 20. The CEO of Nexstar has said that multiple stations in the same city will be combined.
Who remembers why we are opposed to corporate consolidation in the media, given "fake news" and "media literacy" in their - the only - two corners?
From https://news.ycombinator.com/item?id=35676503 re: labels for state-funded and (S)PAC-funded media (and now sermons, too, of late, btw):
> Media literacy > Media literacy education: https://en.wikipedia.org/wiki/Media_literacy#Media_literacy_...
Though these are or would be Nexstar FOX stations, this film from the tumultuous and still unpaid-for wars of the 2000s is relevant again today:
"Outfoxed: Rupert Murdoch's War on Journalism" (2004) https://en.wikipedia.org/wiki/Outfoxed :
> "Fair and Balanced"
The internet says Trump brought on Fox News execs Roger Ailes (former CEO of Fox News) in 2016 and Bill Shine (of the O'Reilly era) in 2018. 21st Century Fox's entertainment assets were sold to Disney in 2019; Fox News now sits under the separate Fox Corporation.
Now that the federal government has de-funded The Corporation for Public Broadcasting and thereby PBS and NPR, do they still have to carry the "state-sponsored media label"?
Oligarchy > United States: https://en.wikipedia.org/wiki/Oligarchy#United_States
Random Attractors – Found using Lyapunov Exponents (2001)
Is anyone doing anything besides visualizations with this chaos stuff? I liked the article linked below depicting the state space of artificial neurons: https://towardsdatascience.com/attractors-in-neural-network-...
Chaos theory > Applications: https://en.wikipedia.org/wiki/Chaos_theory#Applications
People use chaos theory to make predictions about attractor systems that have lower error than other models.
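As a minimal sketch of what computing a Lyapunov exponent looks like in practice, here is an estimate for the logistic map, whose exponent at r=4 is known to be ln 2 ≈ 0.693 (a positive value signals chaos; a negative value, a stable attractor):

```python
from math import log

def lyapunov_logistic(r, x0=0.2, n=200_000, transient=1_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1 - 2x)| along a trajectory."""
    x = x0
    for _ in range(transient):          # settle onto the attractor
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # max() guards the (measure-zero) case x == 0.5 exactly
        total += log(max(abs(r * (1.0 - 2.0 * x)), 1e-15))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic(4.0))   # chaotic regime: positive, near ln 2
print(lyapunov_logistic(2.5))   # stable fixed point: negative
```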
Claude Sonnet 4.5
System card: https://assets.anthropic.com/m/12f214efcc2f457a/original/Cla...
To @simonw and all the coding agent and LLM benchmarkers out there: please, always publish the elapsed time for the task to complete successfully! I know this was just a "it works straight in claude.ai" post, but still, nowhere in the transcript is there a timestamp of any kind. Durations seem to be COMPLETELY missing from the LLM coding leaderboards everywhere [1] [2] [3]
There's a huge difference in time-to-completion from model to model, platform to platform, and if, like me, you are into trial-and-error, rebooting the session over and over to get the prompt right or "one-shot", it's important how reasoning efforts, provider's tokens/s, coding agent tooling efficiency, costs and overall model intelligence play together to get the task done. Same thing applies to the coding agent, when applicable.
Grok Code Fast and Cerebras Code (qwen) are 2 examples of how models can be very competitive without being the top-notch intelligence. Running inference at 10x speed really allows for a leaner experience in AI-assisted coding and more task completion per day than a sluggish, but more correct AI. Darn, I feel like a corporate butt-head right now.
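A minimal sketch of the kind of duration reporting being asked for here (the task name, token count, and task function are hypothetical stand-ins for a real agent/model call):

```python
import time
from dataclasses import dataclass

@dataclass
class TaskResult:
    name: str
    passed: bool
    seconds: float        # wall-clock time to completion
    output_tokens: int

    @property
    def tokens_per_second(self) -> float:
        return self.output_tokens / self.seconds if self.seconds > 0 else 0.0

def run_timed(name, task_fn):
    """Run one benchmark task and record elapsed wall-clock time."""
    start = time.perf_counter()
    passed, output_tokens = task_fn()   # stand-in for a real agent/model call
    elapsed = time.perf_counter() - start
    return TaskResult(name, passed, elapsed, output_tokens)

# Dummy task in place of a real model call:
result = run_timed("pelican-svg", lambda: (True, 1200))
print(f"{result.name}: passed={result.passed} in {result.seconds:.3f}s")
```

Publishing `seconds` (and tokens/s) alongside pass/fail would let readers weigh a fast-but-weaker model against a slow-but-stronger one.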
That's a good call, I'll try to remember that for next time.
Have you thought about benchmarking models a month or two after release to see how it competes vs the day 1 release
For that to be useful I'd need to be running much better benchmarks - anything less than a few hundred numerically scored tasks would be unlikely to reliably identify differences.
An organization like Artificial Analysis would be a better fit for that kind of investigation: https://artificialanalysis.ai/
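The "few hundred tasks" intuition checks out with a back-of-envelope binomial calculation (a rough normal-approximation sketch, not a formal power analysis):

```python
from math import ceil, sqrt

def tasks_needed(p1, p2, z=2.0):
    """Rough number of scored tasks needed to separate two pass rates at
    ~z standard errors (normal approximation to the binomial)."""
    pbar = (p1 + p2) / 2.0
    se_scaled = sqrt(2.0 * pbar * (1.0 - pbar))  # se of the difference * sqrt(n)
    return ceil((z * se_scaled / abs(p2 - p1)) ** 2)

# Separating a 70% model from a 75% model takes on the order of 600+ tasks;
# a 50% vs 70% gap needs far fewer.
print(tasks_needed(0.70, 0.75), tasks_needed(0.50, 0.70))
```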
Manually,
From https://news.ycombinator.com/item?id=40859434 :
> E.g promptfoo and chainforge have multi-LLM workflows.
> Promptfoo has a YAML configuration for prompts, providers,: https://www.promptfoo.dev/docs/configuration/guide/
openai/evals//docs/build-eval.md: https://github.com/openai/evals/blob/main/docs/build-eval.md
From https://news.ycombinator.com/item?id=45267271 :
> API facades like OpenLLM and model routers like OpenRouter have standard interfaces for many or most LLM inputs and outputs. Tools like Promptfoo, ChainForge, and LocalAI also all have abstractions over many models.
> What are the open standards for representing LLM inputs, and outputs?
> W3C PROV has prov:Entity, prov:Activity, and prov:Agent for modeling AI provenance: who or what did what when.
> LLM evals could be represented in W3C EARL Evaluation and Reporting Language
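As a sketch of what that modeling could look like, here is one eval run expressed as plain PROV-style triples (the `ex:` identifiers and the mapping of model/run/output onto Agent/Activity/Entity are illustrative assumptions; the `prov:` terms are from W3C PROV-O):

```python
# Sketch: one LLM eval run as PROV-style (subject, predicate, object) triples.
PROV = "http://www.w3.org/ns/prov#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    ("ex:model-under-test", RDF_TYPE, PROV + "Agent"),      # who/what acted
    ("ex:eval-run-1",       RDF_TYPE, PROV + "Activity"),   # what was done, when
    ("ex:completion-1",     RDF_TYPE, PROV + "Entity"),     # what was produced
    ("ex:completion-1", PROV + "wasGeneratedBy",    "ex:eval-run-1"),
    ("ex:eval-run-1",   PROV + "wasAssociatedWith", "ex:model-under-test"),
]

for s, p, o in triples:
    print(s, p, o)
```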
"Can Large Language Models Emulate Judicial Decision-Making? [Paper]" https://news.ycombinator.com/item?id=42927611
"California governor signs AI transparency bill into law" (2025) https://news.ycombinator.com/item?id=45418428 :
Is this the first of its sort?:
> CalCompute
Google to merge Android and ChromeOS in 2026
If you can adb unlock, and it's not a closed box, then people can run F-Droid and install apps. Which means they can run independent path code without "sideload" in the apk download-and-install-by-hand sense. I guess for google, F-Droid IS sideloading.
If you unlock and you cannot run Google Wallet or your banking app, it's a closed box, and the EU anti-monopoly lawsuit may still apply to this. But if they can spin a "trust" story about LEA access to lawful interception or something, this might go away.
I'd say that the projections about Fuchsia and the like have turned out to be less interesting than some people hoped: but having two OSes in the public eye (3 or more if you include Android TV and whatever closed systems run on Nest and Chromecast) was always a mistake.
I can live inside termux but there are things termux struggles to do, (like tcpdump maybe? and interacting simply with data downloaded from outside termux because of sandbox rules), which I very much would want.
I do not like how Android interacts with removable storage. It's an anti-pattern.
I think in general their plans are contrary to the DMA if they prevent F-Droid from existing. The big bet seems to be that Trump can coerce the EU into repealing the law entirely.
> (50) [...] In order to ensure that third-party software applications or software application stores do not endanger the integrity of the hardware or operating system provided by the gatekeeper, it should be possible for the gatekeeper concerned to implement proportionate technical or contractual measures to achieve that goal if the gatekeeper demonstrates that such measures are necessary and justified and that there are no less-restrictive means to safeguard the integrity of the hardware or operating system.
> (54) Gatekeepers can hamper the ability of end users to access online content and services, including software applications. Therefore, rules should be established to ensure that the rights of end users to access an open internet are not compromised by the conduct of gatekeepers. Gatekeepers can also technically limit the ability of end users to effectively switch between different undertakings providing internet access service, in particular through their control over hardware or operating systems. This distorts the level playing field for internet access services and ultimately harms end users. It should therefore be ensured that gatekeepers do not unduly restrict end users in choosing the undertaking providing their internet access service.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%...
Surely most governments have a compelling interest in preserving the ability to sideload apps on Android for software development, information security research, and preserving the open competitive ecosystems that so many bought into and invested in with such terms.
The ability for open source software developers to write and run applications on their [fork of AOSP with a bunch of binary closed source out-of-tree kernel modules] devices should be protected, in order to prevent anti-competitive practices from squandering the open platform the community has helped to build.
Play Store requires a DUNS number and registration these days.
F-Droid does not require a DUNS number for app upload.
(F-Droid is one of a number of third party APK registry and APK installer services. The F-Droid web service hosts signed Android "APK" software packages and updates which can be uploaded by registered users and downloaded without registration or login. The F-Droid application installs APKs from the F-Droid web service; though app install and update requires more taps to install or update multiple packages due to Android's lack of functionality to add third-party package repos with keys, a standard feature in modern Linux software package management systems.)
Android app developers can already choose whether their app can be installed or run on a device that doesn't pass Play Integrity checks.
If non-rooted third-party AOSP forks with recent Security Patch Levels fail Play Integrity checks and thus cannot work with retail banking apps for example, then old versions of Android for which there are no longer updates should also fail Play Integrity checks.
Open standards for modern software management include: schema.org/SoftwareApplication , W3C Verifiable Credentials, Sigstore, SLSA, and OCI Artifact registries which already support signatures.
There are various tools which sideload APKs over HTTPS without any checksum or signature check (e.g. from GitHub releases rather than, say, an OCI registry), which is as reckless as curl | sh.
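The missing verification step is cheap to add; a minimal sketch of checking a download against a published SHA-256 digest before install (the file and digest here are stand-ins):

```python
import hashlib
import tempfile

def sha256_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, expected_hex):
    """Refuse to proceed unless the digest matches the published value."""
    actual = sha256_file(path)
    if actual != expected_hex.lower():
        raise ValueError(f"checksum mismatch: got {actual}")
    return True

# Demo with a stand-in file (a real flow would fetch the APK and its
# published digest from the release page over HTTPS):
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".apk")
tmp.write(b"not a real apk")
tmp.close()
print(verify_download(tmp.name, sha256_file(tmp.name)))
```

Signature verification (e.g. Sigstore, or Android's own APK signature scheme) is stronger still, since a digest published next to the file protects only against corruption, not a compromised host.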
Couldn't bash and zsh run in a container2wasm WASM container that, in a browser tab without install, gets its own SELinux security context like all apps since Android 4.4+?
Does ls -Z work in Android Terminal (or termux, or the ChromeOS term)?
Students and Family Link accounts are currently denied access to containers on Chromebooks.
So on a Chromebook the same curriculum is limited to JupyterLite in WASM which almost works offline in a browser, instead of a local repo2docker container or a devcontainer.json (because there is no money for students to have server resources (like shells, CI, GitLab+k8s resource quotas) other than their provisioned computer).
container2wasm: https://github.com/container2wasm/container2wasm :
$ c2w ubuntu:22.04 out.wasm
Linus Learns Analog Circuits
USB pedals for a software modular synth would be a good project, too.
From https://news.ycombinator.com/item?id=44120903 :
> BespokeSynth is an open source "software modular synth" DAW that can host LV2 and VST3 plugins like Guitarix, which can also add signal transforms like guitar effects pedals. Tried searching for an apparently exotic 1A universal power supply. Apparently also exotic: A board of guitar pedals with IDK one USB-A and a USB-C adapter with OSC and MIDI support; USB MIDI trololo pedals
From "Python notebooks for fundamentals of music processing" https://news.ycombinator.com/item?id=40555387
> Additional Open Source Music and Sound Production tools:
Brandon's Semiconductor Simulator lists what all is not yet modeled. "Basic equations of semiconductor device physics [pdf]" https://news.ycombinator.com/item?id=44723304 :
> Notes re: "Brandon's circuit simulator", which doesn't claim to model vortices in superconductors or the Quantum Anomalous Hall Effect, for example; https://news.ycombinator.com/item?id=43942279#43948096
electronics.stackexchange has CircuitLab built-in; TinkerCAD has circuit assembly and Python on Arduino in a free WebUI, but it's not open source. Wokwi and Pybricks (MicroPython on LEGO smart hubs over web bluetooth) are open core.
LPub3D is an open source LDraw editor for LEGO style digital building instructions. LeoCAD works with the LDraw parts library.
"WebUSB Support for RP2040" https://news.ycombinator.com/item?id=38007967 :
> USB 2x20 pin (IDE cable) GPIO
FWIU Fuzix and picoRTOS will actually run on an RP2040/2350W. The RP2350 has both ARM Cortex-M33 and RISC-V cores, but something like an STM32 can work for months on a few batteries.
Nebulised Heparin treats Covid-19, ARDS, viral, bacterial respiratory infections
ScholarlyArticle: "Efficacy of inhaled nebulised unfractionated heparin to prevent intubation or death in hospitalised patients with COVID-19: an investigator-initiated international meta-trial of randomised clinical studies" (2025) https://www.thelancet.com/journals/eclinm/article/PIIS2589-5...
3D Printing of Magnesium-Containing Biomedical Materials for Bone Repair
"In situ printing of biodegradable implant for healing critical-sized bone defect" (2025) https://www.cell.com/device/fulltext/S2666-9986(25)00186-3 .. "Scientists develop 'glue gun' that 3D prints bone grafts directly onto fractures" (2025) https://news.ycombinator.com/item?id=45141049
> Osteopromotive: https://en.wikipedia.org/wiki/Osteopromotive :
>> Osteopromotive describes a material that promotes the de novo formation of bone.
"Effect of Mg incorporation on the properties of PCL/Mg composites for potential tissue engineering applications" (2024) https://www.frontiersin.org/journals/materials/articles/10.3... ... ( citations: https://scholar.google.com/scholar?cites=1104751989757627663... ) :
> Abstract: [...] The findings demonstrated that adding Mg influenced PCL’s mechanical and thermal properties. The mechanical test results showed that the tensile strength of 15% Mg composite filaments improved by around 10% compared to the neat PCL filaments. However, the elastic modulus decreased by around 50% for the same composition. The thermal study revealed a significant reduction in the degradation temperature from above 400°C for pure PCL to around 300°C for PCL/Mg composite having 15% Mg. Additionally, the weight loss during in vitro degradation showed that the presence of Mg had significantly increased the degradation rate of composite samples. Also, Mg incorporation influences cell adhesion, with better attachment observed for 10% Mg 3DP samples. Overall, PCL/Mg composites offer a solution to overcome the limitation of low thermo-mechanical properties typically associated with the PCL
The von Neumann bottleneck is impeding AI computing?
Actual result: "This new process promises to increase the number of optical fibers that can be connected at the edge of a chip, a measure known as beachfront density, by six times."
Faster interconnects are always nice, but this is more like routine improvement.
"In recent inference tests run on a 3-billion-parameter LLM developed from IBM’s Granite-8B-Code-Base model, NorthPole was 47 times faster than the next most energy-efficient GPU and was 73 times more energy efficient than the next lowest latency GPU."
It's also fascinating that they are experimenting with analog memory because it pairs so well with model weights
Their NorthPole chip doesn't look much different than the Groq LPU or Tenstorrent's hardware or even just AMD's NPU design. The tenstorrent cards have a pretty big amount of SRAM considering their price.
Rust to fuel: Green rust catalyst developed for cost-effective hydrogen storage
"Turning rust into fuel: Green rust catalyst developed for cost-effective hydrogen storage" (2025) https://phys.org/news/2025-09-rust-fuel-green-catalyst-effec...
ScholarlyArticle: "A Catalyst for Sodium Borohydride Dehydrogenation Based on a Mixed-Valent Iron Hydroxide Platform" (2025) https://pubs.acs.org/doi/10.1021/acscatal.5c01894
GDPVal: Measuring the performance of our models on real-world tasks
"GDPVal: Measuring AI model performance on real world economically viable tasks" (2025) https://cdn.openai.com/pdf/d5eb7428-c4e9-4a33-bd86-86dd4bcf1...
GDP? GlobalGoals ... The Sustainable Development Goals (SDGs) include 17 goals, 169 targets, and over 230 indicators.
For strategic alignment,
Strategic alignment: https://en.wikipedia.org/wiki/Strategic_alignment
Sustainable Development Goals: https://en.wikipedia.org/wiki/Sustainable_Development_Goals
To produce the SDGs, IIUC they clustered the world's problems as an international collaborative exercise; to succeed the MDGs (2000-2015).
Each country voluntarily produces an annual SDG report on their progress on their Targets according to the Indicators.
IMHO, priorities should include clean energy and AI efficiency, given the growth projections for AI energy use (and our electrical bills, given continued expected energy supply shortages).
Which real-world SDG tasks can be AI eval'd?
Apparently producing a react component that returns a piece of html with aria tags set up. Long horizon my ass.
Did the LLM in that case suggest adopting an open-source UI library that already has tests for and implements support for W3C ARIA accessibility features, like React-Aria or other alternatives?
Or did it just do the job as prompted and not mention suggestions for continuous improvement like reusing tested open source components?
Not sure how it went in their tests - I've tried Opus and GPT-5 and it was a few lines of React + tests, so I guess 'no'.
Adding Capability Hardware Enhanced RISC Instructions (CHERI) to Linux
CHERI: Capability Hardware Enhanced RISC Instructions: https://en.wikipedia.org/wiki/Capability_Hardware_Enhanced_R...
Is the (page-level) NX bit almost capability based addressing?
Capability-based addressing: https://en.wikipedia.org/wiki/Capability-based_addressing
Do YC after you graduate: Early decision for students
We announced something today at YC called Early Decision, specifically for students. It's a relatively small change, but we thought people might be interested to hear the thinking behind it, even if you don't happen to be a student graduating this year.
A year ago, YC went from running 2 batches / year to 4 batches / year. We did this because we wanted to give founders more flexibility to do YC at the right time for them. It seems to have worked - a lot of founders have told us that they were only able to do YC because the new schedule fit their timeline.
Early Decision was driven by the same motivation. We talked to a lot of college students, and we learned that most graduating seniors interview for their after-graduation job in the fall of their senior year. For the ones who are interested in doing their own startup, this creates a bit of a dilemma. If they don't interview for jobs in the fall in order to apply to YC later, they're risking that they might be left without any options.
We created Early Decision so that they can apply to YC at the same time they're doing recruiting for regular jobs, the fall of their senior year. If they get into YC, they can confidently turn down their other job offers without worrying they'll be left without anything.
Note: this isn't really a new idea. We've quietly done this from time to time since 2018, but we didn't create a dedicated flow in the application software for it, so most people didn't realize it was an option. Hopefully by productizing and popularizing it, we'll make it easier for college seniors to start companies.
Does YC ever intend to revisit doing remote batches again?
There are many founders in the country who are just as driven and motivated but whose real-world situations don't allow uprooting themselves for several months; two very common ones:
- new parents
- disabled family members, or are themselves physically disabled
The discourse on Hacker News has frequently chastised companies demanding RTO, and some of the companies in your portfolio are remote-first (or remote-only). Why does YC make the same kind of RTO demand with batches?
From "Ask HN: How to Price a Product" https://news.ycombinator.com/item?id=41180492#41220971 :
> Asset Value = Equities + Liabilities
> /? startupschool pricing: https://www.google.com/search?q=startupschool+pricing
/? site:startupschool.org pricing: https://www.google.com/search?q=site:startupschool.org+prici...
> Startup School > Curriculum > Ctrl-F pricing: https://www.startupschool.org/curriculum
YC Library: https://www.ycombinator.com/library
/? YC Library : pricing: https://www.ycombinator.com/library/search?query=Pricing
Launch HN: Flywheel (YC S25) – Waymo for Excavators
Hey HN, We're Jash and Mahimana, cofounders of Flywheel AI (https://useflywheel.ai). We’re building a remote teleop and autonomous stack for excavators.
Here's a video: https://www.youtube.com/watch?v=zCNmNm3lQGk.
Interfacing with existing excavators for enabling remote teleop (or autonomy) is hard. Unlike cars which use drive-by-wire technology, most of the millions of excavators are fully hydraulic machines. The joysticks are connected to a pilot hydraulic circuit, which proportionally moves the cylinders in the main hydraulic circuit which ultimately moves the excavator joints. This means excavators mostly do not have an electronic component to control the joints. We solve this by mechanically actuating the joysticks and pedals inside the excavators.
We do this with retrofits which work on any excavator model/make, enabling us to augment existing machines. By enabling remote teleoperation, we are able to increase site safety, productivity and also cost efficiency.
Teleoperation by the operators enables us to prepare training data for autonomy. In robotics, training data comprises observation and action. While images and videos are abundant on the internet, egocentric (PoV) observation and action data is extremely scarce, and it is this scarcity that is holding back scaling robot learning policies.
Flywheel solves this by preparing the training data coming from our remote teleop-enabled excavators which we have already deployed. And we do this with very minimal hardware setup and resources.
During our time in YC, we did 25-30 iterations of sensor stack and placement permutations/combinations, and model hyperparameter variations. We called this “evolution of the physical form of our retrofit”. Eventually, we landed on our current evolution and have successfully been able to train some levels of autonomy with only a few hours of training data.
The big takeaway was how much more important data is than optimizing the model's hyperparameters. So today, we’re open sourcing a 100-hour excavator dataset that we collected using Flywheel systems on real construction sites. This is in partnership with Frodobots.ai.
Dataset: https://huggingface.co/datasets/FlywheelAI/excavator-dataset
Machine/retrofit details:
Volvo EC380 (38 ton excavator)
4x cameras (25 fps)
25 Hz expert operator action data
The dataset contains observation data from 4 cameras and the operator's expert action data, which can be used to train imitation learning models to run an excavator autonomously for the workflows in those demonstrations, like digging and dumping. We were able to train a small autonomy model for bucket pick and place on a Kubota U17 from just 6-7 hours of data collected during YC.
We’re just getting started. We have good amounts of variation in daylight, weather, and tasks, and will be adding more hours of data and also converting to LeRobot format soon. We’re doing this so people like you and me can try out training models on real-world data, which is very, very hard to get.
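In imitation learning, observation frames and action samples have to be paired up before training. As a sketch of how the 25 fps camera frames might be matched to the 25 Hz action stream by nearest timestamp (the timestamps here are hypothetical, and nearest-neighbor matching is my assumption, not Flywheel's published pipeline):

```python
from bisect import bisect_left

def align(frame_ts, action_ts):
    """Pair each camera frame timestamp with the nearest action timestamp."""
    pairs = []
    for t in frame_ts:
        i = bisect_left(action_ts, t)
        window = action_ts[max(0, i - 1):i + 1]  # neighbors around the insertion point
        pairs.append((t, min(window, key=lambda a: abs(a - t))))
    return pairs

frames  = [0.00, 0.04, 0.08]        # 25 fps camera frames
actions = [0.00, 0.04, 0.08, 0.12]  # 25 Hz operator actions
print(align(frames, actions))       # each frame pairs with its matching action
```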
So please check out the dataset and feel free to download and use it however you like. We would love for people to do things with it! I'll be around in the thread and look forward to comments and feedback from the community!
Would you be able to replicate this with the heavy equipment and movements needed to plug orphaned oil wells? In Texas alone, it's a TAM of ~$38B, and ~$150B for the entire US.
The Looming Disaster Under America's Biggest Oil Field [video] - https://news.ycombinator.com/item?id=45361022 - September 2025
Texas has thousands of abandoned oil and gas wells. Who is responsible for cleaning them up? - https://www.texastribune.org/2025/05/08/texas-orphan-wells-e... - May 8th, 2025
The Rising Cost of the Oil Industry’s Slow Death - https://www.propublica.org/article/the-rising-cost-of-the-oi... - February 22nd, 2024
Well plugging SOP:
Could a mini tunnel boring machine plug a well, from the side?
That's definitely a thing.
Is there a name for resealing an aquifer at each layer? Zonal isolation? Zonal re-isolation?
/? Zonal isolation Wikipedia: https://www.google.com/search?q=zonal+isolation+Wikipedia :
> Well cementing, Completion, Squeeze job, Cement bond log, Casing (should prevent fluid from contaminating e.g. aquifer zones)
Orphan wells: https://en.wikipedia.org/wiki/Orphan_wells :
> they estimate there are 29 million abandoned wells internationally
Orphaned wells in the United States: https://en.wikipedia.org/wiki/Orphaned_wells_in_the_United_S... :
> According to the Government Accountability Office, the 2.1 million unplugged abandoned wells in the United States could cost as much as $300 billion
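The GAO figure implies an average plugging cost per well; a quick check of the arithmetic:

```python
# Implied average cost per well from the GAO estimate quoted above.
total_cost = 300e9       # up to $300 billion
wells = 2.1e6            # 2.1 million unplugged abandoned wells
per_well = total_cost / wells
print(round(per_well))   # roughly $143k per well at the high-end estimate
```

That per-well average is broadly consistent with the ~$38B Texas TAM figure upthread, if Texas holds a few hundred thousand of those wells.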
Information could be a fundamental part of the universe
Energy is conserved (locally (*) eventually), but is there symmetry in or conservation of information?
Wave your hand in the water: is the information about the temporary fluidic disturbance gone? Does information about the splash in the water displace other information?
...
A fundamental gbit was postulated years ago; but the conclusion there was that there could be no simultaneous encoding in a fundamental gbit.
"Existence of an information unit as a postulate of quantum theory" (2013) https://www.pnas.org/doi/10.1073/pnas.1304884110 :
> Also, our postulates unveil some connections between physics and information that remain hidden in the standard postulates, thus supporting Wheeler’s hypothesis “it from bit.”
The amplituhedron folks likely have insight on quantum geometry: spacetime (and gravity) as emergent, and you don't have to do so many Feynman diagrams because the amplituhedron describes those relations too.
I don't know how fairly recent rejections of definite causal order affect conceptions of fundamental quantum information? (Gödel did point such out in regards to GR)
MLB approves robot umpires for 2026 as part of challenge system
Microsoft microfluidic channels cool GPU 65%, outperform cold plates by up to 3x
"Microsoft develops breakthrough chip cooling method — microfluidic channels can cut peak temps by up to 65%, outperform conventional cold plates by up to 3x" (2025) https://www.tomshardware.com/pc-components/liquid-cooling/mi...
"AI chips are getting hotter. A microfluidics breakthrough goes straight to the silicon to cool up to three times better." (2025) https://news.microsoft.com/source/features/innovation/microf...
Graphene based chips would have less thermal loss.
Graphene heat sinks without thermal paste:
"Graphene based CPU coolers" (2025) https://www.pcgamer.com/hardware/processors/cyberpower-begin...
"Graphene thermal pad for AMD CPUs promises 17X better conductivity than thermal paste, 2X improvement over Thermal Grizzly" (2025) https://www.tomshardware.com/pc-components/thermal-paste/gra...
...
What work can channeled heat do?
"Electrically gated molecular thermal switch" (2023) https://www.science.org/doi/10.1126/science.abo4297 ... "Thermal transistors handle heat with no moving parts" (2023) https://news.ycombinator.com/item?id=38270523
FWIU it takes extra energy to pipe heat e.g. to a thermal energy recovery area of a datacenter.
IIUC the new advances in thin-film thermoelectrics are for cooling, not energy harvesting: "New thermoelectric cooling breakthrough nearly doubles efficiency" (2025) https://news.ycombinator.com/item?id=45323213
First Zero-Water Zero-Emission 142kW Hydrogen-Powered GB300 Datacenter
Am I overlooking their mention of the hydrogen source? 'Cause to quote Wikipedia -
> Nearly all of the world's current supply of hydrogen is created from fossil fuels.
> As of 2023, less than 1% of dedicated hydrogen production is low-carbon, i.e. blue hydrogen, green hydrogen, and hydrogen produced from biomass.
The front page of Hydrogen Central today has:
"World’s Largest Green Hydrogen Plant release first-ever footage after achieving more than 80% Construction Completion across all sites" (2025) https://hydrogen-central.com/neom-worlds-largest-green-hydro...
Is hydrogen from would-be-landfilled unsorted plastics (with plasma, EM induction, and/or flash heating) "teal" hydrogen?
Blue because sourced from hydrocarbons, Green because diverting plastic from landfills to hydrogen and graphene?
"Hydrogen-Powered Plasma Torch Decimates Plastic Waste in a Blink" (2025) https://news.ycombinator.com/item?id=45127089 .. https://www.kimm.re.kr/eng/sub011001/view/id/1435 :
> The plasma process developed by the program team overcomes these limitations. Its ultra-high-temperature operation rapidly breaks down polymer structures while suppressing carbon formation by using 100% hydrogen fuel. As a result, the process not only secures long-term operational stability but also enables the selective conversion of over 70–80% of the outputs into ethylene and benzene. Notably, even waxes—previously unusable in pyrolysis—could be converted at more than 80% selectivity, boosting energy efficiency.
Can this hydrogen plasma plastic recycling process be tuned down to intentionally produce graphene as a byproduct?
> Maybe rGO reduced graphene oxide wafers could be deoxidized with hydrogen plasma, thus eliminating PFAS-containing photoresist
Another hydrogen plasma question and opportunity:
Would there be net Hydrogen from deoxidizing Aluminum (Al2O3) with Hydrogen cold plasma, maybe through a water loop? Would that sterilize the water? Would that demineralize the water? Hydrolysis and/or a fuel cell?
I'm not sure how an H2 plant in Saudi Arabia, due to start producing in 2027, relates to my question.
If the actual goal is CO2 reduction, and 99% of H2 production is "full-carbon" - https://en.wikipedia.org/wiki/Hydrogen_production - then why aren't they just building a conventional DC, but with no-water-use cooling? The electricity could come from solar arrays & wind farms, with batteries or something for storage.
Vs. if it's just a "Hydrogen is Trending" PR exercise - that seems a much better fit for the facts.
> As of 2023, less than 1% of dedicated hydrogen production is low-carbon, i.e. blue hydrogen, green hydrogen, and hydrogen produced from biomass.
There is at least one large green hydrogen producer.
I don't know how that changes the total energy problems of hydrogen production and storage. Are the new methods of hydrogen production more efficient?
Let's hope for more green hydrogen production.
A plentiful catalyst like Aluminum might make more green hydrogen, for which there are numerous applications like deoxidizing aluminum and deoxidizing reduced graphene oxide wafers for semiconductor and superconductor production.
"Lambda and ECL Bring the First Hydrogen-Powered NVIDIA GB300 NVL72 Systems Online" (2025) https://www.businesswire.com/news/home/20250923779565/en/Lam...
Markov chains are the original language models
Markov chain > History: https://en.wikipedia.org/wiki/Markov_chain#History
Examples of Markov chains: https://en.wikipedia.org/wiki/Examples_of_Markov_chains
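A bigram Markov chain language model fits in a few lines; a minimal sketch over a toy corpus (not any particular historical implementation):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it (a bigram chain)."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Random walk over the chain, sampling successors proportionally."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the"))   # every emitted bigram occurs in the corpus
```

Storing successor lists (with repeats) rather than explicit probabilities makes `random.choice` sample in proportion to observed bigram frequency.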
A Hopfield network is an RNN (Recurrent Neural Network).
Hopfield network: https://en.wikipedia.org/wiki/Hopfield_network
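A minimal Hopfield sketch (pure Python; Hebbian outer-product learning on a single stored pattern, synchronous sign updates) showing recall from a corrupted input:

```python
def train(patterns):
    """Hebbian outer-product weights with zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=5):
    """Synchronous sign updates toward a stored attractor."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([pattern])
noisy = [-pattern[0]] + pattern[1:]   # corrupt the first element
print(recall(W, noisy) == pattern)    # True: the net repairs the flipped bit
```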
From "Ask HN: Parameter-free neural network models: Limits, Challenges, Opportunities?" (2024) https://news.ycombinator.com/item?id=41794272 re: neural network topologies :
> The Asimov Institute > The Neural Network Zoo: https://www.asimovinstitute.org/neural-network-zoo
> PNG: "A mostly complete chart of neural networks" (2019) includes Hopfield nets! https://www.asimovinstitute.org/wp-content/uploads/2019/04/N...
> [ Category:Neural network architectures, Types of artificial neural networks ]
Cardboard-confined rammed earth towards sustainable construction
ScholarlyArticle: "Cardboard-confined rammed earth towards sustainable construction" (2025) https://www.sciencedirect.com/science/article/pii/S235201242...
"Making Corrugated Cardboard Stronger And Waterproof" (2025) https://hackaday.com/2025/06/15/making-corrugated-cardboard-... re: "Learn to Build With Cardboard! STRONG, Waterproof and Free." by NightHawkInLight: https://youtube.com/watch?v=45JhacvmXV8 :
> Gluing multiple panels together so that the corrugation alternates by 90 degrees every other panel makes them more sturdy, with wheat paste (1:5 mixture of flour and water) recommended as adhesive.
> Other tricks are folding over edges help to protect against damage, and integrating wood supports. Normal woodworking tools like saws can cut these glued-together panels. Adding the wheat paste to external surfaces can also protect against damage. By applying papier-mâché skills, a custom outside layer can be made that can be sanded and painted for making furniture, etc.
--
"Concrete draping" is like paper-mache with landscape fabric soaked in concrete; to make planters, sculptures, possibly decorative facades
More CEB and sustainable construction and dream earthship notes:
From "Wikihouse: Open-Source Houses" https://news.ycombinator.com/item?id=38932603#38935713 .. https://westurner.github.io/hnlog/#comment-38935713 re: https://www.wikihouse.cc/ :
> TIL about The Liberator: The world's first open source compressed earth brick press. https://www.opensourceecology.org/back-to-compressed-earth-b...
> A multiple-CEB unit that makes interlocking blocks that don't require mortar could build on work from this project.
What about cardboard and rammed earth blocks?
> Add'l notes on CEB, Algae, Sargassum, Hemp in the 2024 International and US Residential Building Code, LEGO-like Hempcrete block: https://news.ycombinator.com/item?id=37693225
> FWIU Round homes fare best in windstorms: https://news.ycombinator.com/item?id=37175721#37188180
Deltec Homes, based in hurricane-prone North Carolina, builds round homes (optionally on stilts) that consistently outperform in storms.
> And curvy half walls one brick wide don't fall down:
> [CEB] "Crinkle crankle wall" https://en.wikipedia.org/wiki/Crinkle_crankle_wall
> Some interlocking bricks don't require mortar.
Just BioFiber has developed LEGO-like Stacking, interlocking hempcrete blocks on structural forms, and an off-site forming and drying process.
> Are non-leaching bioplastic frames or filler comparatively economical for interlocking CEB?
InventWood has a "superwood" densified wood product that's 10X the strength of steel.
HempWood is a compressed tensile fiber product that's 20% stronger than Oak dimensional lumber of the same dimensions.
CEB: Compressed Earth Block: https://en.wikipedia.org/wiki/Compressed_earth_block
RPM 6.0 Released with OpenPGP Improvements and Signature Checking by Default
"RPM 6.0.0 Release Notes" (2025) https://rpm.org/releases/6.0.0
"RPM 6.0 Released With OpenPGP Improvements & Enforces Signature Checking By Default" (2025) https://www.phoronix.com/news/RPM-6.0-Released
By comparison, Python has removed PGP signature support with PEP 761 and instead depends upon sigstore fulcio.
/? https://www.google.com/search?q=removed+gpg+and+sigstore+onl...
IIRC OpenPGP signatures do work with W3C VC; there's a URI for the key type and algorithm?
"Chapter 8. Signing container images" and any other OCI artifact: https://docs.redhat.com/en/documentation/red_hat_enterprise_... :
> You can use a GNU Privacy Guard (GPG) signature or a sigstore signature to sign your container image
--
"What does a PGP signature on a Git commit prove?" https://news.ycombinator.com/item?id=26640915
"Git-signatures – Multiple PGP signatures for your commits" (2019) https://news.ycombinator.com/item?id=19183803#19186012
"Linked Data Signatures for GPG" > GpgLinkedDataKeyClass2020, GpgSignature2020: https://gpg.jsld.org/ .. spec: https://gpg.jsld.org/contexts/
"PGP Vocabulary v1" (2021) > PgpVerificationKey2021, PgpSignature2021: https://or13.github.io/lds-pgp2021/
"Verifiable Credentials with PGP" (2022) https://transmute-industries.github.io/vc-pgp/
--
A blog post from 2022 on how to do artifact key revocation with Sigstore Fulcio, Rekor, and AWS Lambda; but what about revocation transparency? https://blog.sigstore.dev/dont-panic-a-playbook-for-handling...
"Why you can’t use Sigstore without Sigstore" (2023) https://blog.sigstore.dev/why-you-cant-use-sigstore-without-...
"Model authenticity and transparency with Sigstore" https://next.redhat.com/2025/04/10/model-authenticity-and-tr...
sigstore/model-transparency: https://github.com/sigstore/model-transparency
Simplifying Cross-Chain Transactions Using Intents
Isn't (cross-ledger) pathfinding possible with ILP Interledger Protocol? (As it is with ODL On-Demand Liquidity pathfinding.)
ILP was specifically designed to find the most efficient path for a payment to travel across a network of different ledgers.
Yea, but intents in this case are solving a different problem: the user just declares the outcome and the solvers figure out the execution, which could be through ILP, DEXs, bridges, whatever is perfect at that moment
With ILP, those cross-chain flows are auditably accounted for in one transaction.
DEX and bridges (and banks with traditional asset ledgers) could implement ILP to become ILP Connectors.
ILP does trustless atomic swaps with Hashed-Timelock Agreements (HTLAs).
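The hashlock half of such an agreement is just a hash preimage plus an expiry; a minimal sketch of the condition/fulfillment check (illustrative only, not the ILP wire format):

```python
import hashlib
import time

def make_condition(preimage: bytes) -> bytes:
    """The hashlock: SHA-256 of a secret fulfillment preimage."""
    return hashlib.sha256(preimage).digest()

def fulfill(condition: bytes, preimage: bytes, expiry: float, now=None) -> bool:
    """Funds release only with the right preimage, before the timelock expires."""
    now = time.time() if now is None else now
    return now < expiry and hashlib.sha256(preimage).digest() == condition

secret = b"fulfillment-preimage"
cond = make_condition(secret)
print(fulfill(cond, secret, expiry=time.time() + 60))   # True: valid, in time
print(fulfill(cond, b"wrong", expiry=time.time() + 60)) # False: bad preimage
print(fulfill(cond, secret, expiry=0))                  # False: timelock expired
```

Because every hop locks on the same condition, either the whole path settles with the revealed preimage or every hop's funds revert at expiry.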
Completely true, but who decides which rails to execute a txn on? And that's where solvers would come in. There are many ways to architect this; it just primarily depends on the problem being solved and what solution is optimal at scale
Pathfinding is based on (lowest) path costs.
Solvers could be implemented as ILP Senders.
ILP has trust lines: how much [money] each party trusts each other party [with] is up to them.
Whether parties obey KYC/AML in pathfinding might be the higher-risk part; mustn't the system disallow lower-cost but higher-risk paths?
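Least-cost pathfinding with a risk constraint can be sketched as Dijkstra over a toy connector graph (the fees and risk labels here are hypothetical; real ILP routing is more involved):

```python
import heapq

# Toy connector graph: edges are (neighbor, fee, risk); all values hypothetical.
GRAPH = {
    "sender":   [("connA", 10, "low"), ("connB", 2, "high")],
    "connA":    [("receiver", 10, "low")],
    "connB":    [("receiver", 1, "high")],
    "receiver": [],
}

def cheapest_path(graph, src, dst, max_risk="low"):
    """Dijkstra over fees, skipping edges above the allowed risk level."""
    allowed = {"low"} if max_risk == "low" else {"low", "high"}
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, fee, risk in graph[node]:
            if risk in allowed:
                heapq.heappush(heap, (cost + fee, nbr, path + [nbr]))
    return None

print(cheapest_path(GRAPH, "sender", "receiver"))                  # low-risk route
print(cheapest_path(GRAPH, "sender", "receiver", max_risk="any"))  # cheaper but risky
```

Filtering high-risk edges before relaxation (rather than penalizing them in the cost) is what "disallow" means here: the cheap-but-risky route simply never enters the search.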
I do agree! But really, it depends on the exact problem being solved and the constraints you're optimizing for. We could collaborate on a joint article and go more in depth into these two; let me know what you think
Here's this about x402 and ILP and HTAs: https://news.ycombinator.com/context?id=45348242
Docs, Specs, Use Cases;
ILP terminology from Rafiki: https://rafiki.dev/overview/concepts/interledger/ :
> Packet, Peer, Connectors (Sender, Connector, Receiver), Payment pointer, SPSP Simple Payment Setup Protocol, STREAM Protocol, STREAM receipt
ILP finds routes from Senders through Connectors to ILP Receivers. An ILP Connector does Pathfinding, Quoting, and Forwarding. There are ILP Addresses, but there is no global routing table.
Agents turn simple keyword search into compelling search experiences
x402 — An open protocol for internet-native payments
How are Hashed-Timelock Agreements (HTLAs), as used in the Interledger Protocol (ILP) and the Web Monetization protocol, more secure than x402?
Does x402 prevent the double-spending problem?
Isn't it regressive to return to dependence on DNS for financial transactions?
> Does x402 prevent the double-spending problem?
This depends on the implementation on the underlying network, but basically the spender signs an authorization for transfer, and the merchant either settles that onchain themselves or delegates to what is called a facilitator that settles on their behalf. On EVM chains, the exact payment scheme leverages EIP-3009 signatures
ILP (Ripple, FedNow,) has Connectors. I just had this conversation about "Intents" and ILP Connectors: https://news.ycombinator.com/context?id=45296648
"Powering AI commerce with the new Agent Payments Protocol (AP2)" https://cloud.google.com/blog/products/ai-machine-learning/a... :
> AP2 builds trust by using Mandates—tamper-proof, cryptographically-signed digital contracts that serve as verifiable proof of a user's instructions. These mandates are signed by verifiable credentials (VCs) and act as the foundational evidence for every transaction.
google-a2a/a2a-x402: A2A x402 extension: https://github.com/google-a2a/a2a-x402
SingularityNET is this concept too, FWIU. https://github.com/singnet
So A2A has W3C VC Verifiable Credentials (and DIDs), but not x402?
Re: ILP payment pointers, DNS, Blockcerts (W3C VC) https://news.ycombinator.com/item?id=42961635 :
> How can or should a Blockcert indicate an ILP Interledger Protocol address or a Payment Pointer?
In order to avoid DNS; basically because gethostbyname() does not indicate DNSSEC validation status, or channel security status (e.g. whether there's DoH/DoT/DoQ at every edge in the DNS network), or CT Certificate Transparency log cert revocation status (and OCSP and CRL are in-band).
How can ILP and x402 (and IDK EDNS) be integrated? Are they complementary?
> Think of x402 as the universal "cash register" signal and ILP as the versatile "payment network" that can handle any currency. [...] and pathfinding with path cost and HTLA Hashed-Timelock Agreements for the whole path, with an auditable open spec message standard that accounts for each of the Connectors involved (who specify credit limits).
> So, x402 can signal the need for a payment, and ILP can be the underlying mechanism to fulfill that payment request, regardless of the user's preferred currency or payment provider
How do x402 and ILP SPSP Simple Payment Setup Protocol compare in terms of signaling the need for a payment?
> SPSP is a simplified, connectionless mode of Interledger that is often used for streaming micropayments, as seen in the Web Monetization standard. The signaling is more implicit and is discovered through HTML/HTTP, rather than being an HTTP status code itself.
From "HTTP 402: Payment Required" (2020) https://news.ycombinator.com/item?id=22214156 :
> The new W3C Payment Request API [4] makes it easy for browsers to offer a standard (and probably(?) already accessible) interface for the payment data entry screen, at least. https://www.w3.org/TR/payment-request/
There's probably a better HTTP Status dog for 402?
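A minimal sketch of the 402 round trip (Python stdlib; the X-Payment header name and token here are hypothetical placeholders, not the x402 spec's payload format):

```python
import http.server
import threading
import urllib.error
import urllib.request

class PaywallHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("X-Payment"):          # hypothetical header name
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"paid content")
        else:
            self.send_response(402)                # HTTP 402 Payment Required
            self.send_header("X-Payment-Required", "amount=1;currency=USD")
            self.end_headers()

    def log_message(self, *args):                  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PaywallHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

status, body = None, None
try:
    urllib.request.urlopen(url)
except urllib.error.HTTPError as e:
    status = e.code                                # 402 on the first, unpaid request
    paid = urllib.request.Request(url, headers={"X-Payment": "signed-token"})
    body = urllib.request.urlopen(paid).read()
print(status, body)
server.shutdown()
```

The status code only signals "payment required"; the actual settlement (EIP-3009 authorization, ILP payment, etc.) happens out of band and is attested in the retry's header.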
One-step synthesis of graphene containing topological defects
> Abstract: [...] We present a one-step chemical vapour deposition procedure aimed at retaining the precursor topology when incorporated into the grown carbonaceous film. When azupyrene, the molecular analogue of the Stone-Wales defect in graphene, is used as a precursor, carbonaceous monolayers with a range of morphologies are produced as a function of the copper substrate growth temperature. The higher the substrate temperature during deposition, the closer the resulting monolayer is to ideal graphene. Analysis, with a set of complementary materials characterisation techniques, reveals morphological changes closely correlated with changes in the atomic adsorption heights, network topology, and concentration of 5/7 membered carbon rings.
Does this make low temperature superconductors like trilayer and pentalayer rhombohedral graphene easier to make?
New thermoelectric cooling breakthrough nearly doubles efficiency
ScholarlyArticle: "Nano-engineered thin-film thermoelectric materials enable practical solid-state refrigeration" (2025) https://www.nature.com/articles/s41467-025-59698-y :
> Abstract: Refrigeration needs are increasing worldwide with a demand for alternates to bulky poorly scalable vapor compression systems. Here, we demonstrate the first proof of practical solid-state refrigeration, using nano-engineered controlled hierarchically engineered superlattice thin-film thermoelectric materials. [...] The improved efficiency and ultra-low thermoelectric materials usage herald a new beginning in solid-state refrigeration.
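For scale, the standard textbook maximum-COP expression for a thermoelectric (Peltier) cooler shows how efficiency rises with the device-average figure of merit ZT; a quick sketch with illustrative temperatures (the ZT values below are hypothetical, not the paper's):

```python
from math import sqrt

def cop_max(t_hot, t_cold, zt):
    """Textbook maximum COP of a Peltier cooler with device-average ZT."""
    m = sqrt(1 + zt)
    carnot = t_cold / (t_hot - t_cold)             # Carnot limit for this span
    return carnot * (m - t_hot / t_cold) / (m + 1)

# Illustrative 15 K span near room temperature.
print(cop_max(300, 285, 1.0))
print(cop_max(300, 285, 2.0))   # higher ZT -> substantially higher COP
```

The COP stays well below the Carnot limit (Tc/(Th−Tc) = 19 for this span), which is why materials-level ZT gains translate so directly into practical efficiency gains.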
Micro-LEDs boost random number generation
ScholarlyArticle: "Micro-LED based quantum random number generators" (2025) https://opg.optica.org/oe/fulltext.cfm?uri=oe-33-11-22154&id...
Gartner Says Worldwide AI Spending Will Total $1.5T in 2025
EU ministers reach 'compromise' on digital euro roadmap
> The ECB has pitched the digital euro as a way to cut Europe's reliance on U.S. credit cards and as a response to U.S. President Donald Trump's global push for stablecoins pegged to the U.S. dollar.
Over 66 countries peg their currency to the US Dollar, buying and selling to keep their currency's exchange rate fairly close to USD.
There may be value in reviewing the pushback on Diem; why they didn't want one global stablecoin, and why they didn't want regionally-backed stablecoins on a centralized ledger tentatively backed by established competitors like PayPal, Visa, Mastercard, Coinbase, and Stripe.
Diem was Apache-licensed open source software.
There are no fees to use ILP. ILP is an open standard.
"ILP: Peering, Clearing, and Settlement": https://interledger.org/developers/rfcs/peering-clearing-set...
Luau – Fast, small, safe, gradually typed scripting language derived from Lua
I used to use Lua and later LuaJIT in Lumix Engine. I switched to Luau because of its type system. However, it's apparent it was not meant to be used outside Roblox, as it has many rough corners. The documentation is not great, and the community is basically nonexistent - I got zero results when searching for any issues I encountered. Also, it's huge compared to Lua or LuaJIT, causing my project to compile 7x slower. The API is not great (e.g., an async API that blocks, using STL in the API, leaking STL headers). I encounter bugs with analysis/LSP often. Overall, I'm considering moving away from it.
We definitely intend on folks being able to use Luau outside of Roblox, and we know of a number of folks doing so quite successfully, including Remedy Entertainment (Alan Wake 2), Digital Extremes (Warframe), and GIANTS Software (Farming Simulator 25).
That being said, it has historically been hard to get major investment into actively supporting growth of the language off-platform, since our entire team is employed by Roblox to work on the project. We are changing this, though, and investing in the language outside of the platform. As some folks have already mentioned here, we have a general-purpose standalone runtime that we're developing called Lute that's focused on using Luau outside of Roblox to write general-purpose programs, and we're building a whole suite of Luau-programmable developer tools for the language atop it.
It takes time to build things, and the Luau ecosystem is definitely still very young as you've noted, but it's something that we care a lot about and are investing in considerably going forward. We 100% believe that the best thing for the health of the language and the ecosystem is to support more diverse users and more diverse use-cases.
Have you considered using wasm as the foundation for Roblox, instead of Luau?
LunarEngine is built on raylib, which compiles to WASM. FWIU it might be possible to compile a Luau game to WASM with LunarEngine eventually.
"LunarEngine: An open source, Roblox-compatible game engine" (2025) https://news.ycombinator.com/item?id=44995147
A Membraneless Electrochemically Mediated Amine Regeneration for Carbon Capture
> Abstract: [...] This study presents a membraneless EMAR system by fundamentally redesigning the process configuration and using gas diffusion electrodes (GDEs) as both the anode and cathode. [...] A techno-economic analysis estimates a levelized cost of capture of ~$70/tonneCO2, compared to $137/tonneCO2 for conventional EMAR
FWIU the cost is under $50 per ton for capturing concentrated CO2 from industrial sources like power plants or natural gas plants?
New Python CLI Tool Catches MCP Server Issues Before Agents Do
microsoft/mcp-interviewer: https://github.com/microsoft/mcp-interviewer
SpikingBrain Technical Report: Spiking Brain-Inspired Large Models
"Researchers get spiking neural behavior out of a pair of transistors" (2025) https://news.ycombinator.com/item?id=43506198
What are the ways to get spiking behavior out of integrated nanophotonics?
Saturable absorption (excitable semiconductor lasers, graphene laser cavities), NDR: Negative Differential Resistance (RTDs: Resonant Tunneling Diodes), PCM: phase-change materials (DVD-RW)
Metamaterials and metasurfaces are probably useful for extreme nonlinear spiking neuromorphic computing with integrated nanophotonics.
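The spiking behavior being sought in these devices can be sketched abstractly with a leaky integrate-and-fire (LIF) neuron; this is a generic illustration of threshold-and-reset dynamics, not a model of any specific photonic or electronic device:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# Generic illustration of "spiking" dynamics; not a model of any
# particular photonic or electronic implementation.
def lif_spikes(i_in=1.5, tau=10.0, v_th=1.0, v_reset=0.0, dt=0.1, steps=1000):
    v, spikes = 0.0, 0
    for _ in range(steps):
        # dv/dt = (-v + i_in) / tau : leak toward the input level
        v += dt * (-v + i_in) / tau
        if v >= v_th:       # threshold crossing -> emit a spike
            spikes += 1
            v = v_reset     # hard reset after the spike
    return spikes

print(lif_spikes())             # drive above threshold -> periodic spiking
print(lif_spikes(i_in=0.5))    # subthreshold drive -> no spikes
```

Whatever the physical substrate (saturable absorbers, RTDs, PCM), the computational primitive is this same excitable threshold crossing followed by a reset.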
What about optical rogue waves, supercontinuum generation (color, wavelength-division multiplexing); and/or superradiance (as nonlinear optical effects for a neuromorphic computation platform)? ... https://news.ycombinator.com/item?id=41684444
Superradiance: https://en.wikipedia.org/wiki/Superradiance :
> Superradiance has since been demonstrated in a wide variety of physical and chemical systems, such as quantum dot arrays [4] and J-aggregates. [5] This effect has been used to produce a superradiant laser.
Superradiance in semiconductor optics -> Coherent effects in semiconductor optics > Superradiance of excitons: https://en.wikipedia.org/wiki/Coherent_effects_in_semiconduc...
Generative AI as Seniority-Biased Technological Change
It's pretty clear this is happening.
The question is... is this based on existing capability of LLMs to do these jobs? Or are companies doing this on the expectation that AI is advanced enough to pick up the slack?
I have observed a disconnect in which management is typically far more optimistic about AI being capable of performing a specific task than are the workers who currently perform that task.
And to what extent is AI-related job cutting just an excuse for what management would want to do anyway?
I do not see anything in this study that accounts for the decline in economic activity. Is it AI replacing the jobs, or is it that companies are not optimistically hiring, which disproportionately impacts entry-level jobs?
Did tech reduce hiring after Section 174 R&D tax policy changes?
From https://news.ycombinator.com/item?id=45131866 :
> In 2017 Trump made businesses have to amortize these [R&D] expenses over 5 years instead of deducting them, starting in 2022 (it is common for an administration to write laws that will only have a negative effect after they're gone). This move wrecked the R&D tax credit. Many US businesses stopped claiming R&D tax credits entirely as a result. Others had surprise tax bills.
Then companies bought their own stock instead of investing in labor:
"S&P 500 Buybacks Now Outpace All R&D Spending in the US" (2019) https://news.ycombinator.com/item?id=21762582
People just want the same R&D tax incentives back:
"Tell HN: Help restore the tax deduction for software dev in the US (Section 174)" (2025 (2439 points)) https://news.ycombinator.com/item?id=44226145
How Container Filesystem Works: Building a Docker-Like Container from Scratch
We've had chroot since 1979; how did nobody manage to build a Docker-like wrapper for chroot that doesn't require netns?
Docker is a genius idea that looks obvious in retrospect, but someone needed to invent it.
Docker is more than just chroot. You also need: overlay file system; OCI registry and community behind it, to create thousands of useful images. And, of course, the whole idea of creating images layer by layer and using immutable images to spawn mutable containers.
I don't actually think that you need network or process isolation. In terms of isolation, chroot is enough for most practical needs. Network and process isolations are nice to have, but they are not essential.
What I always wondered is why qcow2 + QEMU never gave rise to a similar system. They support snapshots/backing files, so it should be possible to implement a system similar to Docker? Instead what we got is just this terrible libvirt.
containerd/nerdctl supports a number of snapshotter plugins: Nydus, eStargz, SOCI (Seekable OCI), fuse-overlayfs;
containerd/stargz-snapshotter: https://github.com/containerd/stargz-snapshotter
containerd/nerdctl//docs/nydus.md: https://github.com/containerd/nerdctl/blob/main/docs/nydus.m... :
nydusify and Check Nydus image: https://github.com/dragonflyoss/nydus/blob/master/docs/nydus... :
> Nydusify provides a checker to validate Nydus image, the checklist includes image manifest, Nydus bootstrap, file metadata, and data consistency in rootfs with the original OCI image. Meanwhile, the checker dumps OCI & Nydus image information to output (default) directory.
nydus: https://github.com/dragonflyoss/nydus
awslabs/soci-snapshotter: https://github.com/awslabs/soci-snapshotter ; lazily start standard OCI images
/? lxc copy on write: https://www.google.com/search?q=lxc+copy+on+write : lxc-copy supports btrfs, zfs, lvm, overlayfs
lxc/incus: "Add OCI image support" https://github.com/lxc/incus/issues/908
opencontainers/image-spec; OCI Image spec: https://github.com/opencontainers/image-spec
opencontainers/distribution-spec; OCI Image distribution spec: https://github.com/opencontainers/distribution-spec
The OCI runtime spec (opencontainers/runtime-spec//config.md) includes an example of a bundle config.json: https://github.com/opencontainers/runtime-spec/blob/main/con...
The LXC approach is to run systemd in the container.
The quadlet approach is to not run systemd /sbin/init in the container; instead, create .container files in /etc/containers/systemd/ (rootful) or ~/.config/containers/systemd/ (rootless) so that the host systemd manages and logs the container processes.
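As a sketch, a minimal quadlet unit might look like this (the image name, port, and filename are examples, not from the thread):

```ini
# ~/.config/containers/systemd/web.container  (rootless example)
[Unit]
Description=Example web container managed by host systemd

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, quadlet generates a `web.service` unit that the host systemd starts, supervises, and logs, with no init process inside the container.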
Then I realized you said QEMU, not LXC.
LXD: https://canonical.com/lxd :
> LXD provides both [QEMU,] KVM-based VMs and system containers based on LXC – that can run a full Linux OS – in a single open source virtualisation platform. LXD has numerous built-in management features, including live migration, snapshots, resource restrictions, projects and profiles, and governs the interaction with various storage and networking options.
From https://documentation.ubuntu.com/lxd/latest/reference/storag... :
> LXD supports the following storage drivers for storing images, instances and custom volumes:
> Btrfs, CephFS, Ceph Object, Ceph RBD, Dell PowerFlex, Pure Storage, HPE Alletra, Directory, LVM, ZFS
You can run Podman or Docker within an LXD host; with or without a backing storage pool. FWIU it's possible for containers in an LXD VM to use BTRFS, ZFS, or KVM storage drivers to create e.g. BTRFS subvolumes instead of running overlayfs within the VM by editing storage.conf.
Show HN: Clean Clode – Clean Messy Terminal Pastes from Claude Code and Codex
I’ve been impressed with Claude Code but one thing that sometimes gets in the way in my workflows is the messy, mangled text that is shown when pasting text from the Claude Code terminal sessions. So I built an open-source utility that cleans extraneous white space, pipes, and other characters from your CC/Codex pastes.
For example, you can turn this:
`How can I create a Claude Code script that │ │ cleans up extraneous characters and cleans up │ │ extra spaces, new lines, and other messiness │
when I copy from Claude Code terminal │ │ prompts or copy code from Claude Responses in │ │ the Claude Code Terminal? It can make it │ │ hard to read, save, and reuse. `
Into this:
`How can I create a Claude Code script that cleans up extraneous characters and cleans up extra spaces, new lines, and other messiness when I copy from Claude Code terminal prompts or copy code from Claude Responses in the Claude Code Terminal? It can make it hard to read, save, and reuse. While this was built with Claude Code in mind it also works on Codex.`
Try it here: https://cleanclode.com
It’s 100% private (no data collection, tracking, completely open-source). If there’s anything you don’t like, please just create a GitHub issue, contribute your change (https://github.com/TheJoWo/Clean-Clode), or comment here. Thanks, and I hope it’s helpful to some of you.
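The core of this kind of cleanup can be sketched in a few lines (this is a generic illustration, not the actual Clean Clode implementation): strip the box-drawing characters terminals draw around panels, then collapse runs of whitespace.

```python
import re

def clean_paste(text: str) -> str:
    """Rough sketch of terminal-paste cleanup: drop box-drawing
    characters and collapse whitespace runs. Not the actual
    Clean Clode implementation."""
    text = re.sub(r"[│┃┆┊╎]", " ", text)  # box-drawing pipes -> space
    text = re.sub(r"\s+", " ", text)       # collapse spaces/newlines
    return text.strip()

messy = "How can I create a script that │ │ cleans up │ │ extra spaces"
print(clean_paste(messy))
# -> "How can I create a script that cleans up extra spaces"
```

A real tool would be more careful, e.g. only stripping pipes that form panel borders so that legitimate `|` characters in code survive.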
From https://github.com/google-gemini/gemini-cli/pull/5342#issuec... :
> Would .ipynb format solve for this? Unfortunately there's not yet a markdown format that includes output cells (likely due to the unusability of base64 encoded binary data). There are existing issues TODO to create a new format for Jupyter notebooks; which have notebook-level metadata, cell-level metadata, input cells, and output cells.
API facades like OpenLLM and model routers like OpenRouter have standard interfaces for many or most LLM inputs and outputs. Tools like Promptfoo, ChainForge, and LocalAI also all have abstractions over many models.
What are the open standards for representing LLM inputs, and outputs?
W3C PROV has prov:Entity, prov:Activity, and prov:Agent for modeling AI provenance: who or what did what when.
LLM evals could be represented in W3C EARL Evaluation and Reporting Language.
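As a sketch of what PROV-based provenance for an LLM call could look like (Turtle syntax; the `ex:` names are hypothetical, only the `prov:` terms are from the W3C PROV-O vocabulary):

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

ex:completion-42  a prov:Entity ;
    prov:wasGeneratedBy  ex:llm-run-42 ;
    prov:wasDerivedFrom  ex:prompt-42 .

ex:llm-run-42  a prov:Activity ;
    prov:used               ex:prompt-42 ;
    prov:wasAssociatedWith  ex:model-agent .

ex:model-agent  a prov:SoftwareAgent .
```

This captures "who or what did what when" at the level of one prompt/completion pair; activity timestamps and model-version attributes would hang off `ex:llm-run-42` and `ex:model-agent`.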
From https://news.ycombinator.com/item?id=44934531 :
> simonw/llm by default saves all prompt inputs and outputs in a sqlite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave or flush before attempting to modify code given the prompt output?*
Here's a script to parse inputs out of Gemini CLI saved chats from before they implemented the /export command:
parsegeminiclisaves.sh: https://github.com/westurner/dotfiles/blob/933e89e7664e58225...
Language models pack billions of concepts into 12k dimensions
I think the author is too focused on the case where all vectors are orthogonal and as a consequence overestimates the amount of error that would be acceptable in practice. The challenge isn't keeping orthogonal vectors almost orthogonal, but keeping the distance ordering between vectors that are far from orthogonal. Even much smaller values of epsilon can give you trouble there.
So the claim that "This research suggests that current embedding dimensions (1,000-20,000) provide more than adequate capacity for representing human knowledge and reasoning." is way too optimistic in my opinion.
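Both points can be seen numerically: random unit vectors in high dimension are only *approximately* orthogonal, with typical pairwise cosines on the order of 1/sqrt(d). A pure-Python sketch (small d and few vectors to keep it fast; the specific sizes are illustrative):

```python
import math, random

def max_abs_cosine(n_vectors=50, dim=512, seed=0):
    """Sample random unit vectors and return the largest pairwise
    |cosine similarity| -- how far from orthogonal the set gets."""
    rng = random.Random(seed)
    vecs = []
    for _ in range(n_vectors):
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        vecs.append([x / norm for x in v])
    worst = 0.0
    for i in range(n_vectors):
        for j in range(i + 1, n_vectors):
            cos = sum(a * b for a, b in zip(vecs[i], vecs[j]))
            worst = max(worst, abs(cos))
    return worst

# Typical pairwise cosine is ~1/sqrt(dim) ≈ 0.044 at dim=512, so far more
# than `dim` vectors can coexist while staying *nearly* orthogonal -- but
# "nearly" is exactly where distance orderings between non-orthogonal
# vectors can flip.
print(max_abs_cosine())
```

At dim=4 the same experiment returns a near-1 worst case; the quasi-orthogonality that makes superposition work is strictly a high-dimensional phenomenon.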
I also doubt that all vectors are orthogonal and/or independent.
Re: distance metrics and curvilinear spaces and skew coordinates: https://news.ycombinator.com/item?id=41873650 :
> How does the distance metric vary with feature order?
> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless? [...]
>> Are the [features] described with high-dimensional spaces really all 90° geometrically orthogonal?
> If the features are not statistically independent, I don't think it's likely that they're truly orthogonal; which might not affect the utility of a distance metric that assumes that they are all orthogonal
Which statistical models disclaim that their output is unreliable if used with non-independent features? Naive Bayes, Linear Regression and Logistic Regression, LDA, PCA, and linear models in general are unreliable with non-independent features.
What are some of the hazards of L1 Lasso and L2 Ridge regularization? What are some of the worst cases with outliers? What does regularization do if applied to non-independent and/or non-orthogonal and/or non-linear data?
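One concrete instance of the non-independence hazard: ordinary least squares with two nearly-duplicate features produces large, ill-determined coefficients, even though either feature alone would fit fine. A pure-Python two-feature OLS via the normal equations (the data here are synthetic, for illustration):

```python
import random

def ols_2feature(xs1, xs2, ys):
    """Solve OLS for y ≈ b1*x1 + b2*x2 (no intercept) via the
    2x2 normal equations (X^T X) b = X^T y."""
    a11 = sum(x * x for x in xs1)
    a12 = sum(a * b for a, b in zip(xs1, xs2))
    a22 = sum(x * x for x in xs2)
    c1 = sum(a * b for a, b in zip(xs1, ys))
    c2 = sum(a * b for a, b in zip(xs2, ys))
    det = a11 * a22 - a12 * a12
    return ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)

rng = random.Random(0)
x1 = [rng.gauss(0, 1) for _ in range(200)]
x2 = [a + rng.gauss(0, 1e-3) for a in x1]   # x2 is almost a copy of x1
y  = [a + rng.gauss(0, 0.1) for a in x1]    # true model: y = 1*x1 + noise

b1, b2 = ols_2feature(x1, x2, y)
# b1 and b2 individually are unstable (near-singular X^T X amplifies the
# noise), but their sum -- the only identifiable quantity -- stays ≈ 1.
print(b1, b2, b1 + b2)
```

This is the setting where L2 ridge regularization helps (it shrinks the ill-determined difference b1 - b2 toward zero) and where L1 lasso arbitrarily picks one of the two duplicates.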
Impressive but probably insufficient because [non-orthogonality] cannot be so compressed.
There is also the standing question of whether there can be simultaneous encoding in a fundamental gbit.
Polylaminin promotes regeneration after spinal cord injury (2010)
"3D-Printed Scaffolds Promote Enhanced Spinal Organoid Formation for Use in Spinal Cord Injury" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adhm.20... .. https://news.ycombinator.com/item?id=45141972
Grapevine canes can be converted into plastic-like material that will decompose
I’ve worked for two refining companies. They aren’t about to rebuild their global infrastructure to make this happen… it doesn’t matter what’s possible; it’s about which corporations can buy out politicians, and the rich building a society that benefits them.
When you start to look at a lot of technological solutions to problems like environmental pollution, climate change, low-cost energy, healthy food production and distribution, you realize that most of the challenges are not technological in nature, but social and political -- basically human nature (fear and greed).
(This is another reason why the idea that's been floated that "AI" or the near-mythical "AGI" will "solve the world's problems" is fallacy -- unless of course by "solve" it means "make a few companies extremely wealthy at the expense of everyone else".)
when you're inside the machine, it's hard to see how it could work differently
You could have said that about motor cars: that the horse industry wasn't going to give up that easy. It's all about incentives.
Having said that, deep sustainability initiatives like this require some forward thinking, and I don't see the public buying into preserving their own future when the reaction to climate protesters is eye-rolling, and the West and East keep throwing the hot potato of blame at each other rather than trying to solve the problem.
Ideally, the government would introduce regulations to incentivize this for entities for whom the value proposition would, in the short term, be negative. But I don't know if they'll get their act together to do that. So you might be right.
There never existed a "refinery" that produced whatever the equivalent of "50 million barrels of crude oil a day" in horses is. "Big Horse" never existed; it was massively decentralized, even when sold at large annual livestock events.
The 20-year-old me would have been so excited about something like this. The 39-year-old (ok, 40 next month) is more reserved. It's not that I don't think this will be adopted, but more like: what needs to happen (government, civic groups, whatever economic forces) for companies to adopt this? It's going to be a slow burn for sure if this needs to work at a global scale, but the impetus should begin with incentives, sadly.
I couldn’t just leave an upvote because rather than read and agree, I immediately had the identical reaction and then saw your post. I may as well still be reading the order section in the back of my comic books or the gadgets in Popular Science.
I’m grateful the work is being done because it’s essential but no longer have faith in these things being solved in 5, 10, or 20 years.
I think that's beautiful.
you've realized that the problems are not impossible, and it's just a matter of getting people to think about them in the right way.
that's easy. Humans have been getting other humans to think the ways they want since the written word. Nothing is more practiced as a discipline, except perhaps prostitution.
I would kill for this for when I’m buying fresh produce at the shops. Right now I just raw dog the produce into my basket as putting 4 apples into a plastic bag to ease the weighing and transport home seems like a selfish thing to do to the environment, but something that starts to break down soon after that sounds great.
Why don’t you bring plastic bags from home? They are very much reusable, you don’t have to throw them out. They are also quite easy to fold into small shapes and keep on you, or your car, or whatever. I have plastic bags which have endured for literal years. I also decided early on that if I forget to bring a bag, I either do without or have to go back to get one. You start remembering really fast after a few times of forcing yourself to go back.
Another thing you can do is just take a cardboard box from some product in the store. This may depend on country, but where I live the shops leave products on their transport boxes on the shelves. Walking around the store I can usually find one empty box, or maybe one almost empty that I can move the products from into another box for the same product next to it. Then I just take the box and use it to transport my groceries. Stores just throw those boxes out anyway, so they don’t care if you take them (I have asked). At this point it’s a bit of a game for me, to guarantee I always find a box. I have a personal rule never do anything that would make the lives of the workers harder in the process.
I have a cupboard full of bags at home I can reuse. It's right next to my door. Really easy to get to.
75% of the time I forget to take a bag to my car.
As well as all the single use bags (paper and plastic) I bought, I also have jute bags that I got years ago and are still holding up. I like them better as they are bigger and stronger.
Even if I managed to get a bag, the other 75% of the time I forget to take it into the shop and leave it in my car.
Even if I manage all of that, 25% of the time I will end up not having enough bags.
What I would like to see is some kind of deposit system with stronger bags (like my jute bags). Then when I actually remember I can bring them back to the store for someone else to use.
The trick is to always have them where you will need them. I always have one or two in my backpack, in my car, in my luggage when I travel... Their size and weight is almost nothing and the only effort is putting them back after use. Which is where it occasionally fails, sure.
The trick is to not bother; just make sure your bag ends up recycled, not in the street or in the ocean.
Plastic bags are made of polypropylene, and are garbage.
Plastic for the most part is basically garbage, there are so many types that it’s hard to recycle it. PET and HDPE can be recycled fairly easily if they’re sorted, the rest aren’t really worth it (polypropylene, low density polyethylene, PVC).
The only thing that is almost always economically worth recycling is metal, which is separated from the paper/glass/plastic if you have single stream recycling. Plastic should be burned in a cement kiln or buried in a modern landfill, unless it’s well sorted HDPE or PET.
If you recycle then it's probably just going to a street or ocean somewhere else. Plastic recycling is more or less made up.
Reuse is significantly more effective than recycling. Bothering is something we should indeed do. Though yes, disposing of bags properly is also much superior to just throwing them on the floor.
If you use single stream recycling for this, then this is actively bad. Plastic bags clog the sorting machines, and then get thrown out because (even if labeled) they are usually contaminated.
Would it make sense to keep those bags in your car? Or even in some of your pockets?
> 75% of the time I forget to take a bag to my car.
Then take a bunch in the other 25%. You can just leave them in the car.
Grab a bundle right now, or whenever you’re at home and remember, and put them next to your keys, your wallet, or hang them on the handle of your door.
> I like them better as they are bigger and stronger.
Sure, use whatever you like. Just don’t let perfect be in the way of good.
> Even if I managed to get a bag, the other 75% of the time I forget to take it into the shop and leave it in my car.
Then go back to your car! It will be mildly annoying the first two times, and the third time it won’t happen. I mentioned exactly that in my comment.
> Even if I manage all of that, 25% of the time I will end up not having enough bags.
Then start bringing more. This isn’t hard. Leave the extras in your car.
Or just use the cardboard box approach I mentioned.
None of your mentioned obstacles is insurmountable. On the contrary, they are all exceedingly trivial to overcome with the tiniest amount of will to do so.
> 75% of the time I forget to take a bag to my car.
I put our reusable ones on the floor in the entrance to the garage and then that reminds me to put them back in the trunk whenever I go to the car for whatever reason. Then I always have them while out.
I've sometimes left them in the car but just excuse myself at the checkout and go fetch them while the groceries are being rung up.
We use a collapsible (plastic) shopping basket/tub-with-handles for wet produce, the stuff the grocery store insists on spraying periodically, and things like tomatoes where we don't care if they get wet. The store clerks are used to it now and prefer it because they don't have to scan through bags and just put the produce back in the basket afterwards.
If you go this route, keep onions and garlic separate. They last longer if they stay dry.
This. HDPE lasts. So reuse it.
Cardboard not so much, but where I live one can just take how many boxes one can haul off various shops and they will just thank you.
You can bring your own, non-plastic bags. I do wonder if maybe some cultures just don't have this and so the deprecation of plastic bags has left everyone quite confused.
It's a very solved problem, has been for centuries probably. You can even get some with little wheels! If you absolutely can't handle the looseness of the fruits amongst your shopping, you could use string nets.
> You can bring your own, non-plastic bags.
For sure. But reusing the plastic bag you already have is cheaper and more environmentally friendly than buying a new cloth bag, yet many people never even think of using the same plastic bag twice. Even if some food juice spills inside, you can quickly rinse it off, hang it, and it’s good as new.
In my original reply I was trying to convey that you can be the laziest, most forgetful person, and still have an easy solution.
We farm trees for paper anyway.
Because I forget them at home most of the time on the way to something else.
Again, force yourself to go back or do without whenever you forget, and you’re going to start remembering really fast.
Additionally, don’t just take them when you know you’ll need them, do it before. Next time you need to leave your house to go somewhere, grab some and put them in your car. Done. Go put some right now next to your wallet or keys or literally on the handle of your house’s door.
Or just use the cardboard box approach I mentioned. You can’t forget to bring what’s already inside the store.
I quit using bags for produce--I just put the produce in my basket or cart and then straight into the checkout bag on my way out of the store.
The exception is small loose produce like snap peas.
Ugh. That's a REALLY bad idea for anything that you don't thoroughly cook.
That's such an American fear.
Wash it (as you should anyways) and you'll be fine ...
Just wash some forever chemicals over the pesticides; that'll do the job. Jokes aside, I raw-dog with a quick wash and I'm yet to have caught covid, so it can't be that bad.
I always find it interesting when I visit Italy. The supermarkets there do sell some kind of disinfectant for produce, and everyone is really strict about using gloves (this was even before COVID). My country has none of that...
It's really weird how some safety regulations differs between countries - sometimes the rules are even the exact opposite like washing eggs before sale in the US vs. EU.
Makes you wonder how much of it is actually based on any kind of rigorous science and how much is done just because someone thought it was a good idea once and now it's just how we do things.
Unwashed produce has essentially zero risk of COVID. Other various bacterial contamination, yes, though it's very rare for those to do worse than give you an upset stomach or cause any lasting damage.
As someone who works in a market, eating anything without cooking or washing it first is a bad idea. Most of it is fine, but people are disgusting and there's no way of knowing how many people have touched your apple, if someone's kid managed to lick it without you noticing, or someone managed to push everything off the shelf onto the nonslip mat before they got stopped. Bagged produce can be even worse, given the amount of condensation inside the bag after it warms up on the loading dock and then sits in the cooler. The mister above the fresh greens and such doesn't do much and they regularly get touched and knocked out. The potatoes are probably the best, as the dirt on them is obvious.
If you live in an area with entitled people and spineless corporate rules that don't allow stores to confront people over pets, that's instantly worse than everything else combined. Pets like to lie on the floor, someone's dog has peed on the floor, and 5 different random people have petted, hugged, or picked up that dog and 3 others since they left the house. One of those people is probably a cashier who then handles every item you've bought. And then someone inhaled pet hair and sneezed.
Sorry--I don't understand the risk. Are you concerned about germs? Pesticides? Other?
[deleted]
I hate to break it to you, but the loose produce in store isn't clean. That's why you must wash produce before you eat it.
Any washing you do to the produce at home has basically zero chance of killing/removing anything. It's hygiene theatre. People typically don't wash their produce in bleach or soap.
What are you basing this on? If I buy something like parsley and don't wash it then it tastes a bit like fly killer, but if I rinse it thoroughly then it doesn't. Is that just placebo?
Water is a great solvent! And, I'm sure you could use unscented soap if you wanted to. (I just use water)
Anyway, if water won't wash the food clean, then one may as well not shop at the grocery store.
I’m pretty sure I’d rather remove the dirt from my vegetables, but you do you.
You can also read the studies that show mechanical action (brushing, rubbing) under running water effectively reduces the bacteria count https://www.sciencedirect.com/science/article/pii/S0362028X2...
It’ll never be sterile, but it doesn’t need to be for a healthy human. Probably shouldn’t be either.
That link makes my day and confirms so much I wondered about food cleanliness theater.
It's gross but I tend to leave a tiny bit of dirt on my potatoes. I think it's an emotional callback to your point that it might not be great for our food to be completely sterile.
Well, I use dishsoap
People dramatically overestimate how bad plastic is for the environment. The impact of a 10-minute car ride = 10,000+ plastic bags of emissions. And in first-world countries, almost no household plastic ends up in the environment.
Can't imagine this survives napkin scrutiny. A ten mile drive isn't using nearly as much hydrocarbon mass as 10k plastic bags. While most of the plastic hopefully winds up in a landfill, most of the gasoline is water and carbon dioxide by the end. It's tires versus bags. While tires shed, the mass lost in 10min is definitely quite a bit lower than 10k bags or the fraction that escapes the waste pipeline.
30 mpg over 10 miles means about two pounds of gasoline, 910 grams (knock off or add 100 g for ethanol per your preference). A Google search says about 5 grams per bag, so nearly 200 bags.
Nowhere close to 10k, but nontrivial. And this gets reduced, and sometimes outright negated, if you reuse the bag. That doesn't mean we shouldn't evaluate whether plastic shopping bags are the best choice, though.
I don't think replacing them with store-bought doggy-poo and cat-litter bags is better: it's not a reduction, and there's no reuse. If you find yourself discarding them outright, then find an alternative, I guess.
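The napkin math above, spelled out (all figures are rough thread estimates: ~6.2 lb/gal gasoline density, ~5 g per thin bag):

```python
# Rough napkin math from the thread: how many 5 g plastic bags equal the
# fuel mass of a 10-mile drive at 30 mpg? All figures are approximate.
MILES, MPG = 10, 30
LB_PER_GALLON = 6.2     # approximate density of gasoline
GRAMS_PER_LB = 453.6
GRAMS_PER_BAG = 5.0     # commonly cited mass of a thin shopping bag

gallons = MILES / MPG                                  # 1/3 gal
fuel_grams = gallons * LB_PER_GALLON * GRAMS_PER_LB    # ≈ 937 g
bags = fuel_grams / GRAMS_PER_BAG                      # ≈ 187 bags, not 10,000
print(round(fuel_grams), round(bags))
```

Comparing fuel mass to bag mass is itself a simplification (combustion products outweigh the fuel, and bag lifecycle emissions exceed bag mass), but it bounds the original claim: the right order of magnitude is hundreds of bags, not 10,000.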
Don’t forget that a lot of carbon went into making the road, the parking (deforestation or other land destruction for those should be considered too), the car itself, emissions from tyre wear, brake dust, some plastic for the single-use medical devices necessitated by treatment of people struck by drivers, etc. etc.
Though what is often forgotten is the insane amounts of plastic used in farming. Occlusion fabric for weeds, polytunnel skins, silage wrap, etc
I didn't forget, its just awfully hard to fit all that on a napkin.
I can breathe CO2. I don't want plastic in my brain. These two things are not the same.
I think your math is wrong. Most modern cars do up to 150g of CO2 per 100km; there are other emissions too, but they are in way smaller numbers.
I think the units there are off, a Camry hybrid is about 100g direct CO2 per km. One widely repeated calculation has total direct + indirect emissions for a grocery bag at 200g. So 1km driven vs 1 bag is a similar magnitude of emissions.
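Taking the two figures in this comment at face value, the original "10,000+ plastic bags" claim can be bounded (a rough sketch; the ~10 km length of a 10-minute ride is my own assumption, the other numbers are from the comment above):

```python
# Order-of-magnitude check: direct CO2 from driving vs. a widely
# repeated lifecycle estimate for a grocery bag.
CAR_CO2_G_PER_KM = 100   # hybrid sedan, direct emissions (quoted above)
BAG_CO2_G = 200          # lifecycle estimate per bag (quoted above)

ride_km = 10                               # assumed ~10-minute ride
ride_co2_g = ride_km * CAR_CO2_G_PER_KM    # 1000 g for the trip
bag_equivalents = ride_co2_g / BAG_CO2_G   # 5 bags, not 10,000
print(bag_equivalents)
```

So on these figures a 10-minute drive is on the order of single-digit bags of emissions.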
Please be careful of such "metrics/statistics." Their very nature means they're politically and financially incentivized to lean towards a higher or lower number than "the other guy." And, of course, a big number is scarier in a vacuum. What if a paper bag is 250g of emissions?
The poster child for me here is low-GWP refrigerants. Sounds good, right? Well, think about how CO2 that's captured, filtered, and compressed compares. I'll leave everybody to argue with themselves on this. Does CO2 vs R-whatever use more energy? Less? Does it somehow justify the emissions and pollution of manufacture?
My conclusion is... I don't know.
We have enough data to estimate the reasonable range of possibilities and exclude the upthread assertion that a ten-minute car ride is similar in emissions to 10k plastic bags. A degree of uncertainty need not make us helpless in the face of loud ignorance; that's how we end up giving equal weight in the media to the consensus of professionals in a field and to political operatives with fringe beliefs but no evidence.
Sorry, I screwed up and misread what you wrote- primarily, a simple "we can do way better than 30mpg." And there's not a lot of wiggle room to debate, with any integrity, the amount of CO2 that burning a set quantity of gas produces. A couple of percentage points for NOx and friends, and that's it.
I am confused why everybody mentions emissions, though. In a discussion on paper/plastic/reusable bags, in response to a call for napkin math on a claim of "10,000 bags from the fuel needed to get to the store" (essentially the argument made), CO2 isn't relevant: just the mass of the gas used to get to the store.
I'm not pleased with how this turned out. To be told I'm wrong? That's fine, it's the internet. I'm disappointed and alarmed at how badly wrong the suggested corrections are... it's deeply frustrating for me as well.
That's comically wrong. Human resting metabolism is on the order of 20 grams of CO2/hr.
See: https://www.sciencedirect.com/science/article/pii/S036013232...
As for a kilo of gas per 10 miles: https://en.m.wikipedia.org/wiki/Gasoline says 0.71-0.77 g/mL, and a standard conversion table says 3.785 L per gallon (https://www.engineeringtoolbox.com/volume-units-converter-d_...). And finally, since we're comparing burning gas for a car vs using it in plastic, the figure of merit is petroleum usage, not greenhouse gas emission. Technically, plastic and gasoline aren't going to be 1:1, but that's not napkin math anymore unless you're a petroleum engineer/chemist.
Also most of that weight is oxygen. The mass of carbon from the gasoline in an apples to apples comparison to plastic would be much lower.
It doesn't really make sense to be comparing plastic waste to CO2 emissions though. These aren't fungible.
Cars don't only need gasoline to exist and work.
10 miles in a 30 mpg vehicle uses only 1/3 of a pound of gasoline, or roughly 150g. So, nowhere close to 10k or even 1k...
150g is only equal to about 1-5 of the reusable bags in CA grocery stores, depending on the store.
I cycle to the supermarket and every bush I pass on the way is full of plastic.
How is that the fault of plastic wrappings and not the fault of people throwing trash onto the side of the road?
> The impact of a 10 min car ride = 10,000+ plastic bags of emissions.
Emissions isn't the main problem with plastics.
But yes, we should also cut down on driving cars, or drive EVs, or take public transport.
>People dramatically over weight how bad plastic is for the environment.
I can only give a: what in the fuck are you talking about?? Modern medicine is literally finding microplastics in men's testes. "People" are dramatically underestimating how completely and utterly screwed the next dozen generations of humanity are with the plastic waste we've blanketed the earth in. Assuming humans survive that long.
Sure, plastics aren't great for the environment when we're just dumping them out there without much care. Obviously reducing waste and reusing is what we should strive for on all fronts. But demonizing one thing results in overcompensation on the flip side, and we know for a fact that that's not where we want to end up either. Remember when we tried to reduce paper use as much as possible because of deforestation? Or saturated fats?
> Remember when we tried to reduce paper use as much as possible because of deforestation?
No, I don’t. I do remember a push to recycle paper which was a net win for everyone.
> Or saturated fats?
Great counterpoint. Remind me of the benefits of having microplastics in your testes. Which part of that had scientists questioning historical data?
At least microplastics don't make you angry and violent that we can tell.
On the other hand, it's going to be around (relative to pre-emission levels) for a lot longer than the lead (paint gets chipped off and disposed of, we stopped using it in end-consumer products, etc)
There's some concern that microplastics in the brain could contribute to depression, anxiety, executive function disorders, autism, Alzheimer's etc.
As the amount of plastics in the brain increases who knows what it'll do to us.
it's not about CO2 emissions, it's about plastic waste that eventually degrades to microplastics
SAME. It kills me inside when people wrap things like fruits and potatoes, which have a natural peel they'll remove before eating anyway, in plastic.
Japan is wild for this, but also pretty good at recycling plastic in general.
Bananas are often wrapped individually for sale. You buy a box of biscuits and they're often individually wrapped in plastic etc.
Most of that plastic can't be recycled so it's probably being burned or thrown away.
Japan incinerates most of their plastic.
Why don't you just buy some re-usable fruit bags?
https://www.target.com/p/lotus-original-reusable-produce-bag...
Bring a cloth bag to put the apples in after checkout.
The plastic bag also prolongs the life of the produce, which is the main reason I want it.
Wasting produce is much worse for the environment than wasting a bag. After all if you don't litter the bag, throwing it out is pretty harmless.
We use these fresh and crisp bags. They sound like a gimmick, but they really do work. We reuse a bag for months until it's full of holes and not doing its job well anymore.
https://www.woolworths.com.au/shop/productdetails/2824/fresh...
Would be nice to have bags like that with their weight printed on them that machines trust.
Where I live they have scales that tare at the beginning as part of the process of using your own bag.
Do you write down the result? How is the process connected? Smart produce scales log weight => Smart checkout scales compare weight to produce logs?
Write down -what- result.
You put the bag on the scale, it then sets this amount to 0.0
You put the product on the scale, (say 500g of apples), It shows 500g.
You remove the bag, it takes off 4g, you add the bag it puts on 4g.
There is no need to write down the result.
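A minimal sketch of the tare workflow just described (the `Scale` class and the 4 g bag weight are hypothetical, purely for illustration):

```python
# Hypothetical self-checkout scale with tare: zero out the bag's
# weight, then weigh only the produce.
class Scale:
    def __init__(self):
        self._load = 0.0    # total mass currently on the platter
        self._offset = 0.0  # tare offset

    def place(self, grams):
        """Put something on (or, with a negative value, take it off)."""
        self._load += grams

    def tare(self):
        """Zero the display at the current load."""
        self._offset = self._load

    def reading(self):
        return self._load - self._offset

scale = Scale()
scale.place(4)          # empty reusable bag (made-up 4 g weight)
scale.tare()            # display now reads 0
scale.place(500)        # 500 g of apples
print(scale.reading())  # 500.0 -- bag weight excluded from the price
```

Removing the bag then shows -4 g, and putting it back restores 0, exactly the behavior described above.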
Are bags so heavy that you need to tare them out?
the places around here are using compostable plastic bags. not sure what it's made of but it can be composted in municipal facilities according to the bag. one downside is they are green tinted and harder to see what is in there but if it removes some of the plastic killing the ocean then i'm for it... assuming it's not a plastic that degrades into microplastics.
> it can be composted in municipal facilities according to the bag
Note that "according to the bag" is very different from "according to your municipality"; my understanding is that most places actually can't handle them, and they might need to divert your compost to the landfill if it has too much of those plastic bags. They can be composted under certain conditions, but whether the facility your municipality uses has those is unclear.
See also "flushable" wipes that must not be flushed down the toilet.
> See also "flushable" wipes that must not be flushed down the toilet.
That really should be prosecuted for false advertising. Just because I can physically flush Orbeez down the toilet doesn't mean it's safe to do so.
I'd assume those bags would be okay considering they break down after a few days of holding compostable materials, and frequently make a mess in the compost bin. The "compostable" cutlery is definitely not compostable under normal household situations though.
My understanding is most municipal compost facilities can handle them - the vast majority of municipalities don't have a facility at all. They are expensive. A home pile won't compost them, and a pile at municipal size is likely a health hazard, so not a good option.
Most of these, at least in my region, are made from cornstarch. They decompose well, without "microplastics," but only under the correct conditions.
Home composts don't usually meet these; their temperature doesn't go high enough for full decomposition, and you can have traces of polymers left behind. I throw them in the compostable-waste collection because thankfully my local authority collects these to generate biogas, and my guess is they end up in much larger, managed composts where they can fully decompose.
I thought it was all PLA:
https://en.wikipedia.org/wiki/Ingeo
I think there's also "biodegradable" plastic which has cornstarch in it which allows bacteria to degrade it, but that's not the same thing?
PLA doesn't actually biodegrade outside of specialist industrial facilities. It was much vaunted as an eco-friendly thing when 3D printing started using it, but we rapidly found out it can last decades without breaking down much, if at all.
> but if it removes some of the plastic killing the ocean then i'm for it
It doesn't. The plastics in the ocean don't come from your grocery store. They come from fishing gear and from places without municipal trash service.
Honestly? It's basically greenwashing; it doesn't actually do anything at all. No one ever composts these things, and landfilling or incinerating a bag does not harm the environment.
I just threw one of those into my compost pile last month and it’s still there. No clue how long it’s supposed to take.
yeah I mentioned municipal compost because they can get the compost temperature way higher than we can at home scale. It should break down in the big compost piles they have
Ironically, I only use the produce bags to wrap raw chicken and beef in an entirely different section.
I’d guess paper would work fine for that purpose, except that it’s harder for the checkout person.
I've been doing that since before anyone cared, it just seems wasteful to use a bag for a handful of things. I use bags if I buy more than a few of something, or if it's something with dirt on like potatoes.
I bring these little net bags from home. They work great to keep the veggies separated.
We have been using linen bags from Rough Linen and have been pretty happy with those.
We've got reusable mesh bags that we use for this.
Paper bags solve that use case much better.
Yeah that’s the problem. Plastic solves a logistics problem, not a structural problem.
Are your Twinkies stuck in a hot truck in Texas for a week? No problem!
It doesn't _only_ solve long-term logistical problems. Plastics are used for things like takeout containers, drink cups and straws, amongst others - things that are only needed for a short time.
All of those need to hold hot and wet things for long enough without contaminating them.
Agree, but I don't see any mention of that in the article, so I don't have enough information to argue for that.
I'm sure we can agree though that having 17-day decomposing plastics that don't contaminate with heat and water is a good thing, so I hope it is that.
Decomposing isn't a binary process where you wait 17 days and then the plastic disappears. Something that decomposes in 17 days will have ~0.25% disintegrate every hour, which means there is now contamination in your food. Personally, I'd rather not wait for that contamination to be shown to cause health issues.
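The ~0.25%/hour figure checks out under the (admittedly crude) assumption of linear decomposition over the quoted 17-day lifetime:

```python
# Sanity check of the ~0.25% per hour figure, assuming decomposition
# is roughly linear across the quoted 17-day lifetime.
days = 17
hours = days * 24            # 408 hours
pct_per_hour = 100 / hours   # ~0.245% of the material per hour
print(round(pct_per_hour, 3))
```

Real decomposition is unlikely to be linear, but the order of magnitude holds.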
I’m pretty sure 17 days is far too short for most serious uses.
Who cares. If 50% of the usage is short term stuff like takeout, grocery bags, etc then this wipes out that waste.
What contaminants would result from cellulose-based plastics like in the article? I'd guess probably things that'd at worst make the hot and wet thing taste bad, no?
Is your shipment of drink containers stuck in a hot truck in Texas for a month? No problem! They’re plastic
My point is it doesn't have to be a complete solution to replacing plastic to be able to have some benefits to replacing some plastics.
You can have local manufacturing processes so that it doesn't have to get stuck in a truck in Texas for a month.
And there'll still be uses for the long lived plastics. You don't have to use one plastic for everything - like we don't today.
Building a box that can last for centuries when you're only going to use it for 25 minutes and toss it is pretty wild if you think about it.
Bro I’m not agreeing with it, single use plastics are ridiculous. The failure in replacements continues to be what problems they solve for the supply chain.
Unless you want to eat at Applebees, a small, locally sourced, organic, etc restaurant owner can’t conjure up a supply of biodegradable containers. But your local joint can order 5000 of them and keep them in a back room in less than ideal conditions for a year at minimal costs.
Not saying it’s right, just trying to draw attention to reality
Again, not all replacements need to replace 100% or even 10% of plastic use to have a positive impact. There's space for a short-life plastic, just like there are (currently) reasons for long-life plastics.
They used to make it work with waxed paper. There's no reason why that can't be used for a large proportion of food packaging again.
That's just a different non-biodegradable petroleum product: https://en.wikipedia.org/wiki/Paraffin_wax
A lot of food grade wax is carnauba.
I assume that anything sold today as waxed paper has plastic in or on it, but I don't really know.
I want my produce wrapped in this plastic, not the forever plastic. Maybe the biodegradable plastic has its use cases for other special-purpose packaging with a very short shelf life.
I don't know much about this area at all, but it seems like it would be neat to have a plastic that stood up well to heat and moisture, but you could leave it soaking in some petrol/diesel/oil liquid, and it would melt into that and leave you with something still useable.
As I write this, it sounds like I'm just describing something like petrol in a solid form at room temperature. Perhaps there's something a little less far-fetched that people are working towards?
> it sounds like I'm just describing something like petrol in a solid form at room temperature
That's what plastic IS. That's why it sounds like it, because plastic is in fact solid hydrocarbon.
So not only is it not farfetched, it exists today, which is also why incinerating plastic for energy is the best possible way to dispose of it. You remove the plastic from the world, you reduce the amount of oil pumped for fuel, and you get to use the oil you do pump twice! Once for plastic, and again for fuel.
It's one of those environmental slam dunks with zero downsides. (Before you ask: Modern incinerators do not release any toxins from burning plastic, none.)
Polyolefin plastics like
https://en.wikipedia.org/wiki/Polyethylene
and
https://en.wikipedia.org/wiki/Polypropylene
and even
https://en.wikipedia.org/wiki/Polystyrene
are "solid hydrocarbons" but most plastics are more complex than that. One reason we quit burning trash in many places is the presence of
https://en.wikipedia.org/wiki/Polyvinyl_chloride
which produces HCl which eats the incinerator. [1] Sure you can build a chemically tougher incinerator and add lime but practically stripping toxins from incinerators is a function of building a stripper tuned to whatever toxins are expected to be in the particular waste and frequently adding something that reacts with them. You can't really "burn up" heavy metals and certain other poisons and those either go up the stack or are part of the ash that has to be disposed of.
A technology you hear about more than you see real implementations of is "chemical recycling of plastics" through pyrolysis, which implements more or less controlled combustion and captures petrochemical molecules that can be used either for fuel or to make plastics and other chemicals. These processes manage to capture or consume most of the products, but some of the polycyclic aromatic hydrocarbons produced when you burn plastic are practically drugs that cause cancer:
https://en.wikipedia.org/wiki/Benzo(a)pyrene
[1] Plenty of others contain oxygen, such as https://en.wikipedia.org/wiki/Polyethylene_terephthalate or nitrogen, such as https://en.wikipedia.org/wiki/Acrylonitrile_butadiene_styren... and https://en.wikipedia.org/wiki/Nylon
Most disposable plastic is not PVC. Because chlorine prolongs the life of the plastic, it's specifically used in things that you don't throw out.
In any case incinerators can handle the chlorine - it's so reactive that it's actually very easy to filter.
> You can't really "burn up" heavy metals
There are no heavy metals in plastic, and very little in consumer waste as a whole.
> are "solid hydrocarbons" but most plastics are more complex than that
But those 3 you listed are the vast majority of the thrown out plastics.
Municipal waste has a large fraction of waste from demolished buildings which includes wood, concrete, bricks, all sorts of stuff. PVC is a significant part of that waste because it is used for siding, floors, etc.
In a consolidated municipal waste stream heavy metals are a concern because they concentrate in the ash which has to be carefully stored. This kind of system
https://en.wikipedia.org/wiki/Plasma_gasification
is supposed to encapsulate heavy metals into slag particles that aren't very mobile and can be incorporated into roads, building aggregates, and such, but people have struggled to make them work. Part of it is that the syngas plant, plus whatever uses the syngas and cleans up the syngas and/or the products of using the syngas, is a chemical factory that depends on the inputs having a certain composition, and the composition of a municipal waste stream is not at all constant.
PET is a major thrown-out plastic that's not a hydrocarbon; it's also the most recycled. Polystyrene, funny enough, is easy to chemically recycle, but not through pyrolysis; it's the sort of thing you might even demo in a high-school chemistry class if styrene weren't so carcinogenic. It's never caught on because expanded polystyrene is hard to handle, transport, and bring back to a chemical factory large enough to consume it efficiently.
How is PET not a hydrocarbon (for the purposes of burning it)? It's (C10H8O4)n; the oxygen makes it not technically a hydrocarbon, but it will burn just fine and cleanly.
Your point about building waste is valid, but I think most of that stuff goes in dumpsters and can be directed to different waste handling.
Hah.
We burned shavings/rejects from a polyester-resin + fiberglass boat-building operation... in a 200 L drum.
That was quite smoky and smelly, but still I think better than just shipping it all off for burying in a landfill. And fiberglass decomposed basically into fine sand too.
Environmentally speaking, shipping it off to a landfill would have been orders of magnitude better; burning it released thousands or millions of times more pollution. Most polyester resins are aromatic, so incomplete combustion can produce a wide variety of quite toxic substances.
I guess we did release some - mostly soot and half-burned hydrocarbons to be decomposed by solar UV. Still, thinking of all this just being buried for like 2e6 years... that seems even more wrong.
> It's one of those environmental slam dunks with zero downsides
In relation to directly burning oil for fuel, yeah. In relation to other disposal methods, there's still the pretty major downside of being dependent on a non-renewable resource, in addition to…
> (Before you ask: Modern incinerators do not release any toxins from burning plastic, none.)
Greenhouse gas emissions are still an issue, though, no? Or do the incinerators capture that?
If you were going to burn oil for power, and instead you burn used plastic for power, greenhouse gas emissions from the burning are roughly similar. However, you skip the emissions from oil extraction and transport, assuming the plastic is burned close to its point of use/collection.
How do these systems handle the extra crap on the plastic?
So do we already do this? And if not, why not?
We sure do, Sweden imports trash (actual trash, not recycling) because it's a huge part of their energy source.
A large amount of plastic recycling is burned, but always in secret, because when people find out they freak out, because they mistakenly think that making some new plastic out of it is somehow better.
Petrol is really quite harsh and includes carcinogenic chemicals like benzene in sizable quantities. It's not something you can soak an item in and then put in contact with food.
Diesel and other oils tend to be (somewhat) less bad - but there are many oils in food which are nearly identical, and hence anything which breaks down in those situations is likely to break down while in food contact too.
[deleted]
Vitis riparia (a wild grapevine endemic to the whole eastern side of North America; grows like a weed all over, extremely disease-resistant and cold-hardy) and hybrids with it also produce gum arabic from their spring pruning wounds: https://agresearchmag.ars.usda.gov/2015/dec/grape/
Combined with the high sugars in the fruit and this cellulose thing, it's overall an extremely useful plant.
It is also used as grafted rootstock for Vitis vinifera in the majority of vineyards (at least in Europe), because of Phylloxera [0].
[0]: https://en.wikipedia.org/wiki/Phylloxera#Fighting_the_%22phy...
> grapevine
The headline is practically a demonic summoning ritual for the naturalistic fallacy. The article is talking about cellulose. We've had cellulose forever. Cellulose is dirt cheap. We are a post-cellulose-scarcity civilization. Extracting it from grapevines ought to be mocked as our century's version of bringing coal to Newcastle.
There's a reason we don't use cellulose packaging for everything and it has nothing to do with grapes.
Hint: moisture exists in the world. Biodegrading in 17 days usually means that it breaks down a lot sooner in conditions we care about.
> Funding for this research was provided by the U.S. Department of Agriculture's National Institute of Food and Agriculture and the National Science Foundation.
What useful research could we have funded instead?
The argument, which doesn't seem insane, is that this film is useful because it is particularly optically clear and strong, which are not properties I would have expected from cellulose. I agree 17 days is too short, but that seems like an interesting opportunity for future research. I would highlight that the number is 17 days when buried in wet soil, not sitting around on a shelf. Cardboard will break down when buried in wet soil, yet we use it extensively in packaging without issue.
> optically clear and strong, which are not properties I would have expected from cellulose
You never heard of Cellophane? https://en.wikipedia.org/wiki/Cellophane
Cellophane is still used to refer to LDPE grocery bags in the former-Soviet immigrant diaspora.
Yeah I know this usage from older people when I was a kid, they referred to any clear thin wrapping as cellophane where to me it was just plastic. My father told me that cigarette packs are kind-of environmentally friendly because they are made up of nothing but paper, tobacco obviously, cellulose acetate for the filters, and cellophane for the wrapper. Recently I got interested into whether they still use cellophane instead of plastic, so I did some * * * science * * * by dunking a wrapper in water (and yes, it did soak up some water) and burning some (it burns cleanly like paper with grey ashes, unlike plastic which stinks and leaves behind hard black tar). So apart from the printing colors, it looks bio-degradable, with the other reservation being that especially the filters will spend a really long time underground before becoming integrated.
Or movie / photographic film?
That's cellulose acetate, though (or, previously, nitrate.) Cellophane is just cellulose. It's like the difference between drinkable ethanol and ethyl-acetate nail polish remover, or between morphine and heroin. Clearly related but significantly different substances.
> which are not properties I would have expected from cellulose
You know why we've lost so much early cinema history to fire and moisture?
Because silent-film-era film is made of cellulose. It burns. Rapidly. Photography pioneers knew that. They used cellulose anyway because it's flexible and transparent. Right technological decision at the time.
We've known about cellulose properties for literally over a century. There's nothing new here.
The article explains why grapevine waste is a concern, and why it’s a particularly effective source of cellulose.
> What useful research could we have funded instead?
This research seems useful enough to me.
> grapevine waste is a concern, and why it’s a particularly effective source of cellulose.
We have markets and prices. If cellulose became scarce enough that the cheapest source for it became agricultural waste, we wouldn't need the government to fund research into an extraction process. Industry would be all over it on its own.
State funding for research is there to solve the problem of industry incentives being aligned against foundational, long term research. What we're looking at here isn't anything like that. It's just one more organic extraction process, another entry in a long tradition of such things.
Another day has come around since; I've slept on this argument, and I still find it genuinely anti-scientific and smug. As for the coals to Newcastle: did you know that there are steam engines that want to be fed particular types of coal, not just any type, to run well (I think I learned that from the YouTuber Cruise the Cut)? So there's a point to be made that sometimes bringing coals to Newcastle is exactly what you want. Other than that, we need much more research into all kinds of cyclic processes that we can use to make our activities more sustainable. Right now far too much material is on a one-way trip to the landfill or the incinerator, and how to continue mining and farming is left solely as an exercise to the future reader, with no hind- or foresight at all. Traditionally people used all kinds of wrappings and containers, many of them suboptimal from a modern POV, which we have now replaced with all kinds of plastic that is littering the planet, land and water alike. A solution will not be simple or easy, but if cellulose from grapevine can be part of it, that's probably a good thing.
You know, I'm sure if biodiesel/bioethanol can be a thing, then extracting cellulose from grapevine can make it too. It's just a matter of marketing it correctly ;)
The point is that it’s like finding research into how to acquire air. It’s everywhere - just go collect some. Who needs this?
I think it’s a valid point.
So... what's the reason? :)
We don't have a good mechanism for waterproofing cellulose without various complicated industrial processes. Finding a way to do that would be interesting research.
But anything involving grapevines is just ecomasturbation.
Actually, no, it's worse, because it robs attention and funding from real problems. Plastic pollution isn't predominantly plastic bags (or plastic straws, for that matter), which seem important because the sort of person who writes articles on a laptop for online publication encounters them daily and doesn't see the stream of untreated industrial waste, mostly from the big rivers in Asia.
IMHO, the best investment in mitigation of plastic pollution would be automatic cleanup mechanisms, especially for microplastics in the ocean.
In fairness, those industrial waste streams are mostly produced by “recycling” facilities for consumer waste.
The whole plastic straw thing is nuts. The old waxed paper straws were fine. The new “paper” straws are coated in PFAS and way worse for your health and the environment than most alternatives.
This article reminds me of that. Cellulose isn’t a new technology, but, like wax paper straws, it’s apparently forgotten arcane knowledge.
It's interesting to me that you think the point of greatest effectiveness is exactly where I'd say all hope is realistically lost, the oceans being so vast in surface and volume. This is end-of-pipe thinking, where I believe we should really start at one of the many points earlier in the process: industrial consumption of materials and industrial waste management are such points, and, as you say, protection of waterways from pollution.
Given how lousy mankind has proven to be at collecting and effectively re-using plastic waste while avoiding concomitant pollution of water and air and material down-cycling, the real mistake lies in the sheer enormous tonnage-per-year of plastic and its growth. This volume of production should never have happened in the first place. But of course it has, so there's a place for ocean cleanup efforts.
But to state that "the best investment [...] would be automatic cleanup mechanisms" while denigrating research efforts to produce better plastic-ersatz sounds to me like futuristic techno-boondoggle-babble, not unlike that crazy 'hyperloop' thing. Automatic ocean cleanup robots! Yaay! LA to NY in under 30 minutes! Yaay! Colonies on Mars! Yaa---wait wot?? People can't even clean up after themselves or avoid throwing their trash into the nearest river, but no problem, we'll clean that up in no time AUTOMATICALLY?? C'mon, give me a break.
The major innovation of this paper seems to be a rayon process that uses less harsh chemicals than the current viscose and lyocell processes.
The UK banned single-use plastic bags at major supermarkets. We all moaned about it for a few minutes, forgot our reusable bags a couple of times, and then got on with it. Even the small plastic bags you put fruit or pastries in are now gone in a few supermarkets - initially they replaced them with transparent paper-based windowed bags, but then I think people realised you really don't need to see inside the bag, and brown paper bags are back.
Yeah, I still don't understand why brown paper bags aren't more standard for everything.
I do see some manufacturers reducing plastic, fortunately. For example, my box of tea bags used to come wrapped in plastic, and now it suddenly doesn't, and I'm wondering why it ever needed plastic. But there's still so much stuff that comes wrapped in plastic, and often multiple layers of it.
Just ban it. There are excellent alternatives.
Brown paper from recycled fibers is often contaminated with mineral-oil residue (e.g. from ink on paper) and other unhealthy chemicals, sadly.
There was a report in Germany, years ago, of a range of organic products that failed during testing. They discovered the packaging (recycled paper) was the issue, not the crops and the supply chain before packaging.
So, a _really_ biodegradable cellulose bag is desirable. Even if only to use it inside a brown bag (to stabilise it).
The road to hell is paved with good intentions... I wonder how many here even notice this, the most important comment here, and just keep repeating how plastic bags are worse.
Yes, they are terrible, but we shouldn't just blindly replace them with anything and call it a day; we should keep up the (continuous) investigation for the best solution. Poisons are everywhere these days.
Wouldn't the best solution be ensuring they all end up in an appropriate landfill rather than a river?
It seems people are so against landfills that they're happy to sort their plastic and send it on an epic journey of fraud where it ends up in a river in India. Meanwhile it could have been buried with their other trash and appropriately managed.
IMO most plastics should be incinerated. This reduces the amount of waste that needs to be landfilled immensely and generates electricity as a bonus.
> happy to sort their plastic and sent it on an epic journey of fraud where it ends up in a river in India
It's not like they like this outcome or are even aware of it. We can't blame the individuals who want to do things properly here.
The correct solution to "broken recycling chain" is not "let's not recycle", it's "let's fix the recycling chain".
The issue with non-reusable / non-recyclable stuff is that we have a limited amount of it, and it is also environmentally expensive.
Even recycling is not ideal. There's waste, and it costs energy. It's in the end not so sustainable.
The best solution to me is reusable bags and containers (washable, and possibly refundable / returnable) whenever possible.
The issue with recycling, as-practiced, is that there's no lifecycle accounting (in many countries, including most of the US).
If we boosted plastic price at point of sale by a recoverable amount, claimable when returning the container for recycling, we'd get higher participation.
Separately, we should also apply the same to the post-return lifecycle: the company pays a premium for the material flow, then is rebated that premium upon proof of recycling.
Yes, and same for reusable containers / bags.
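The deposit-and-rebate flow described above can be sketched in a few lines. This is a hypothetical toy model; the amounts and function names are made up for illustration, not drawn from any real scheme:

```python
# Toy model of a deposit-refund scheme with a producer-side rebate.
# All amounts are illustrative assumptions, in cents.

DEPOSIT_CENTS = 10   # added at point of sale, refunded when the container is returned
PREMIUM_CENTS = 5    # paid by the producer per container, rebated on proof of recycling

def consumer_net_cost(bought: int, returned: int) -> int:
    """Deposits paid minus deposits refunded."""
    return DEPOSIT_CENTS * (bought - returned)

def producer_net_cost(sold: int, recycled: int) -> int:
    """Premiums paid minus premiums rebated on proof of recycling."""
    return PREMIUM_CENTS * (sold - recycled)

# A consumer who returns everything pays nothing extra:
print(consumer_net_cost(20, 20))   # → 0
# A producer whose containers never get recycled bears the full premium:
print(producer_net_cost(1000, 0))  # → 5000
```

The point of the structure is that both the person holding the container and the company producing it only get their money back when the material actually completes the loop.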
If energy is a problem, then surely we'd just build global recycling plants at geothermal hotspots? It's not like shipping is a problem. The sense I get is that the main bottleneck with recycling isn't energy but labour: handling and sorting rubbish properly is tedious and unpleasant, and the West doesn't want to spend the money that its workers would expect for it.
Tangentially (and I'm aware this sounds incredibly stupid, and I'm sure it is), but on the topic of geothermal hotspots: what is the downside of finding some lava/magma source deep, deep underground and just dumping rubbish in there? Surely most of the fumes would be absorbed before they reach the surface? Is it just too expensive an idea? Has it been done? Is it likely to have undesirable long-term side effects? Do we simply not have safe access to such things?
>It seems people are so against landfills that they're happy to sort their plastic and sent it on an epic journey of fraud where it ends up in a river in India
See prior comment about road to hell being paved with good intentions.
You have the same problem with plastic. Recycled plastic may not be food safe, and have contamination from whatever it was used for before recycling.
About a year or so ago, somebody in the chain of suppliers of plastic PET bottles for seltzer water, used by several different brands, switched to a recycled plastic with a distinct dark tint to it. Immediately obvious because the product, water, is obviously clear.
My family returned six cases of 15 bottles each to Costco, then found that the other brands at local stores were the same way. A couple of months later the bottles went back to normal. I still wonder if they switched back due to customer rejection of the new plastic, or if they found the new plastic was in some way leaching contaminants.
New plastic doesn’t have that problem and is incredibly cheap.
Take price as a proxy for resource / energy input and see that new plastic is also incredibly light on inputs.
New plastic may have some off-gassing / contact contamination concerns though.
Last time I checked, energetically we’re better off using plastic over paper or recycled plastic, and burying the waste… if we could do that reliably, which we don’t seem to be able to.
There are several separate problems here.
One is "People don't like bags stuck in the branches of trees and clogging waterways in their parks". Lightweight plastic shopping bags are so thin that a light breeze can pick them up and loft them up into the air easily. They cost approximately nothing - <2 cents retail, significantly less in bulk. It is incredibly expensive by comparison to pay someone to remove them from tree branches and riparian zones - tens of dollars in wages, equipment, and liability insurance. This is a pragmatic reason why municipalities passed bag taxes or bans. Forcing people to use paper or heavier-weight plastic bags that don't blow in the wind, even if they're not in practice "reusable", solves this one. Taxing them 5 cents or 10 cents or 25 cents per bag nudges a high percentage away.
You don't have to make the bags out of recycled paper. You can make them out of new, unbleached paper. Still much better than plastic.
>road to hell is paved with good intentions
At some point there are so many bricks in the road, its direction so clear, that the intentions are no longer good. At best they are ignorant, but too often they are self-serving malice sailing under the flag of ignorance.
I'm old enough to remember when supermarkets only had brown paper bags. They were weak, the handles tore off easily, and anything cold made the bag wet so it would fall apart, usually from the bottom. Supermarkets must have spent a lot of money replacing customers' broken items when bags failed even before leaving the store.
So when doing the calculus for brown paper bags don't forget to include the cost goods wasted when they fail.
Thankfully we did the full stupid circle quickly enough that the gray hairs in the paper bag industry remembered this and the current generation of bags lacks the handles so people are forced to carry them from the bottom.
Those are not the brown paper bags the GP was referring to. Those fall under the earlier category of "forgot our reusable bags a couple of times and then got on with it". The ones that are left are to replace "small plastic bags you put fruit or pastries in".
Australian supermarkets have excelled at replicating this paper bag fiasco.
The white plastic bags they replaced are orders of magnitude more durable and able to carry, at a guess (I should test this), ten times the weight. Basically you can fill a white plastic bag with 1.25 litre water bottles until no more can physically fit, and it will be safe to carry and reuse 50 times.
Fortunately the white plastic bags are still available online (eBay / Amazon / etc) so I just buy 50 for my own use as required and use them till they nearly fall apart then repurpose them as bin liners.
They’re incredibly cheap, don’t really get dirty in an unhygienic way, can be washed if something does spill in them, and they fold up in to almost no space.
> Basically you can fill a white plastic bag with 1.25 litre water bottles to the extent no more can physically fit in the bag and it will be safe to carry and reuse 50 times.
Yeah that's not good, the way they do that is with more plastic in the bags. A single bag weighs as much as 5-10 old timey single use bags.
I’m talking about the old timey bags, they’re still available, just not at the checkout of supermarkets.
I'm old enough to remember when supermarket brown paper bags didn't have handles... Agree with the other commenter that the handles are pointless but the bags work fine if you just ignore them.
Incidentally, given that I'm _not_ old enough to remember a time before supermarkets had plastic bags, either the invention of attaching handles to paper bags took a very long time to migrate to my corner of California, or this comment makes no sense
I'm old enough to remember when supermarkets had boxes. All the goods they sell come to them in big cardboard boxes, and supermarkets would have a fenced-off area where they dumped all those boxes, so whenever a customer needed a box to put their groceries in to take them home, they'd get a box from that fenced-off area.
I haven't seen those in decades unfortunately. It was a great way to reuse those boxes.
I don't recall ever seeing one of those in the person-facing parts of the store, but I've not had issues either asking someone who works there in the store or going around the back of the store where they do loading and unloading and asking there.
The handles on brown paper bags are noob traps. You're supposed to hold the bag against your body with one arm, your hand on the bottom of the bag. They work fine like this. I've walked home totaling hundreds if not thousands of miles (two or three times a week for many years) with paper grocery bags like this and never had issues.
I think banning plastic completely in packaging is a much harder ask, as whether it is needed is rather nuanced (if I understand it correctly). For example, it's perfectly possible to deliver cucumbers to an end customer without them being shrink-wrapped. However, to deliver enough cucumbers to enough customers at supermarket scale, I understand from several documentaries that plastic is still required. (For those outside the UK, the plastic-covered cucumber is the social barometer for plastic packaging.) Banning plastic bags was easy and simple, and our laws don't tend to deal with nuance very well...
Interesting thing is, the non-organic cucumbers at my supermarket don't come in plastic, but the organic ones do. I never know which ones to get.
Yeah, this is terrible.
Obviously the people who want to buy organic and the people who want to avoid plastic the most are probably almost the same group. They know this. It feels like "Fuck you environmental-aware buyers" to me.
Of course wrapping everything non-organic is a no go as well, it would be terrible for the environment. And I'm afraid stopping the production of non-organic stuff ain't happening anytime soon.
I believe the real solution, until they fix this, is to go to a market or an organic store where nothing is in plastic, at least for fruits and vegetables, if that's possible for you.
> Obviously the people who want to buy organic and the people who want to avoid plastic the most are probably almost the same group. They know this. It feels like "Fuck you environmental-aware buyers" to me.
They're different types of environmental. One is "I don't like pesticides and I have money" and the other is "I don't like eternal plastic waste".
Different things, same group of people (money matters aside - people don't buy because it's more expensive, but despite it), no?
The "I have money" part is obviously unfortunate. Buying healthy and environmentally-friendly shouldn't be conditioned by money. The next best individual thing is voting with one's own wallet in the meantime.
The "I don't like pesticides"¹ and the "I don't like eternal plastic waste" are very compatible. Both pesticides and eternal plastic waste hurt the environment in their own ways.
I suppose the target is the restricted set of people who are interested in organic products for their own individual health and who don't push the reasoning far enough to see that their health depends on the environment being healthy in the long term. Or, people who prefer buying organic food and who will make a compromise.
Do you have a different reading?
¹ we will note that organic doesn't mean "no pesticides", and is broader than just pesticides, but I accept the shortcut.
> (money matters aside - people don't buy because it's more expensive, but despite it), no?
I didn't say people buy it because it's more expensive.
Indeed, but removing the money part of your sentence:
> They're different types of environmental. One is "I don't like pesticides" and the other is "I don't like eternal plastic waste".
Makes it clear that both concerns would come from the same group of people, more or less.
Or not? This is my question to you. Just take my previous comment as "What do you mean, different?".
You have a point with your money thing. Supermarkets absolutely make their choices with individualistic assumptions, taking into account classes of people and their revenues, and I suspect this is how we ended up with this wrapped-organic-vegetables heresy.
> I suspect this is how we ended up with this wrapped organic vegetables heresy
It could be that things treated without pesticides might require more protection against things attacking them in transit? Who knows.
It would be quite concerning :-)
That would mean that we eat active pesticides when buying non-organic food. Not that it would totally surprise me either.
We do know that organic markets don't need the plastic though. But they might have shorter circuits as well (which is also a good thing).
> That would mean that we eat active pesticides when buying non organic food. Not that it would totally surprise me neither.
Not necessarily. It could be microbes downstream of pests touching your crops that shorten the shelf life, for example.
> But they might have shorter circuits as well (which is also a good thing).
It's a good thing if you have the time and money to buy things that are more expensive to produce.
> It's a good thing if you have the time and money to buy things that are more expensive to produce.
Again with the money! We are looping here. I really don't know what you are trying to defend, but not the same thing as me for sure.
I think I will stop there, we are not having a constructive discussion. You are just opposing random stuff without answering key points.
> Again with the money!
If you claim something is good, it's maybe okay to point out that it might not be good.
Yeah, the issue is of course that the supermarket sells both kinds of cucumbers, and they need to be able to distinguish between organic and non-organic cucumbers, which is why they wrap one type in plastic. And of course it's better for the environment if that's the type they sell the least of.
So every step makes sense, but the end result looks ridiculous. Maybe they can use paper wrappers instead? Or maybe just settle on one type of cucumber.
The way I understand it, without the wrapping a much larger percentage of cucumbers need to be thrown away before ever being sold, due to spoilage. That's not a win for the environment.
> That's not a win for the environment.
How is this calculated? I know that growing a cucumber has an environmental cost but so does producing plastic, delivering it and then using machines to shrink-wrap every cucumber.
This study, for instance, [1] looks at CO2 emissions. That may be a somewhat limited view, but the effect is rather large: adding 5 wrappers around a cucumber (4 of them useless) would still result in about the same CO2 usage as adding no wrapper. And that's not even counting spoilage after the cucumber has been bought by the consumer.
[1] https://www.frontiersin.org/journals/sustainable-food-system...
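The break-even logic in that study can be illustrated with a toy calculation. All numbers below are made-up assumptions chosen for round arithmetic, not figures from the paper:

```python
# Toy model of the wrapper-vs-spoilage trade-off. All numbers are
# illustrative assumptions, not figures from the cited study.

CUCUMBER_CO2_G = 100.0  # assumed CO2e to grow and ship one cucumber
WRAPPER_CO2_G = 20.0    # assumed CO2e per plastic wrapper

def co2_per_sold_cucumber(spoilage_rate: float, wrappers: int) -> float:
    """CO2e attributed to each cucumber that actually reaches a customer.
    Spoiled cucumbers still cost CO2 to produce, so their footprint is
    spread over the ones that do sell."""
    return CUCUMBER_CO2_G / (1 - spoilage_rate) + WRAPPER_CO2_G * wrappers

# With these assumptions, losing half the unwrapped cucumbers to spoilage
# costs as much CO2 as putting five wrappers on every cucumber:
print(co2_per_sold_cucumber(0.5, 0))  # → 200.0
print(co2_per_sold_cucumber(0.0, 5))  # → 200.0
```

The real question is then empirical: how much spoilage does the wrapper actually prevent, which is what the cited study tries to measure.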
CO2 usage, I get you, but what about the plastic waste?
Brown paper bags were the standard grocery store bag up until sometime in the 1980s. The transition to plastic was pushed by environmentalists with a "save the trees" message focused on how many trees were used to make the paper bags.
Can you back that last claim up?
The push was from plastic manufacturers.
https://www.unep.org/news-and-stories/story/birth-ban-histor...
Not really. It's just my memory of the times. "Save the trees" was a very big thing for a while, in arguments for avoiding paper and cardboard packaging.
A lot of so-called environnemental awareness campaigns are the work of trade organizations or multinationals. Often they shift the blame onto the consumer, so for example it's "you need to recycle" instead of "we need to produce less".
I agree getting rid of plastic bags is a net win for the reasons discussed.
But I can't take the brown paper bag thing seriously! They are a UX nightmare in my workflows: carry one bag per trip over multiple trips (instead of the ~4 I can do with reusable or plastic bags), or try the handled ones, whose handles tear off and leave groceries all over. Reusable bags are nice though.
They break in a lot of use cases.
What you see in a lot of places that have people heavily relying on things like delivery services, is people using the reusable bags like they would use single-use bags - so now you have spent even more resources on a bag that's still being used as single-use. Oops.
> Yeah, I still don't understand why brown paper bags aren't more standard for everything
Because plastic is cheaper. As I understand it, it's often got a negative cost to it: the companies are paid to take it and use it.
Negative cost? Why? It still needs to be produced and transported, right? I don't understand the business case behind this.
The way it was explained to me is that there are so many plastic feedstocks produced by fuel production that it's often most efficient to pay someone just to take it away.
That's basically the economic equivalent of having to pay to get rid of a fallen tree despite that tree then going on to be chipped and sold in bulk to whatever the nearest local place buying chips is.
The feed stock is basically worth nothing, it's the labor and energy investment that you add to it at every step that adds the value.
Unfortunately, all the actual tea bags are usually plastic. The wrapping is probably a small percentage of the plastic in this product.
I'm pretty sure my tea bags are paper, and have always been paper. It's the more recent "pyramid" shaped tea bags that I think are made of plastic. The most recent change to my tea bags was to remove the staple so they could go in organic waste.
You'd be surprised how paper-like the plastic bags appear to be.
Could try burning a tiny piece and check how it behaves and smells.
I doubt the advice would be to throw them in the organic waste if it was plastic.
Some plastics can go in the organic waste bins, such as the organic waste bin bags.
While they can go in the organic waste bins, they still get sorted out at the end because they don't degrade fast enough.
Study from Australia: https://www.sciencedirect.com/science/article/pii/S0956053X2... Article from California: https://www.siliconvalley.com/2024/11/21/when-compostables-a... German Trash Company: https://www.zakb.de/keine-fremdstoffe-im-bioabfall
Sure and it might be that the teabags are also being sorted out.
Why can't staples go in organic waste? They go into my compost pile and will rust. Iron is like 5% of average crustal rocks and is abundant in soils.
Teapigs pyramids are made of cornstarch
I doubt that's made any kind of environmental/ecological impact at all. The cheap, flimsy plastic carrier bags contain orders of magnitude less material than the reusable kind, and had a second life as a bin liner. Now I need to buy bin liners, which are usually made out of sturdier plastic on top of having to get a reusable bag.
Most of the plastic involved in getting food from farm to home isn't the carrier bag or even the food wrappers. It's the massive amount of plastic that pallets of goods are wrapped in for shipping, which happens several times throughout the supply chain.
We should focus on the latter, instead of the former. Pretty much all we're doing is virtue signalling and maybe hoping that it'll make a tiny difference.
Heck, even a marginal improvement in fuel efficiency of trucks delivering to grocery stores would probably do more than these plastic bag shennanigans.
Where I live there are entire fields of crops grown under plastic sheeting, and I do not mean reusable plastic greenhouses, I mean sheeting pegged to the ground. And then the produce is boxed up in plastic, stuck on a pallet, wrapped in plastic and delivered to the supermarkets.
Then, when I'm in town I see building projects where the entire building is wrapped in plastic sheeting: eight-story buildings wrapped like a parcel in plastic. Even the ground-level hoarding that used to be plywood boards is now typically covered in plastic sheeting printed with branding.
And the roadworks: what used to be reusable metal signs and barriers have recently switched to plastic signs and plastic barriers. I get these get battered and broken quickly but at least the steel ones would typically get melted down and reused at their end-of-life. I imagine the plastic ones just end up in landfill or incinerated.
It does kinda make my home recycling efforts seems futile when commercial enterprises are moving in the opposite direction towards more plastic.
There are significantly fewer pallets being delivered, and they are handled by significantly fewer people, so it is far easier to ensure that the plastic used in the delivery process is disposed of properly. Whereas with the abundance of cheap plastic bags available on tap to the masses, disposal turns into a mess. I generally agree with you that we should focus on the whole chain, and there are lots of easy wins to be had, but decreasing the amount of plastic that gets stuck in trees or otherwise lost in the *environment* is still a good thing.
I fear that this sort of focus on individual actions has made a lot of people rather upset (e.g. the plastic straws debacle) for very little gains. And I worry that it might backfire.
I'd like to address your last point because there's another thread about the larger amounts of plastic being easier to police:
Not all pollution is fungible.
Greenhouse gas emissions and the microplastic epidemic are two related, but separate issues.
There is no amount of fuel efficiency that would stop a plastic bag from blowing into a stream or tree and shedding microplastics as it breaks down.
We bring Tupperware containers when buying groceries, for the meat, ham, cheese, fish, etc., and even if the cynic might say it's just a "feel good" action, well, I still put a lot of plastic in our recycling bin, but we've halved it since we started doing that (and some other tricks). Yes, it definitely feels good.
I don't think containers that come from the customer are allowed behind the counter here for food safety reasons.
Here they put a sheet (that they would use anyway) on the scale to separate it physically from the container, and what enters the container doesn't leave it (or goes to the trash).
Doesn't it add significant weight to the price?
They simply adjust the tare weight in the scale and that's it. They do it anyway with their own 1-use plastic boxes or sheets.
I've never seen a meat counter employee re-zero the scale after putting a container or wrap on it. I do occasionally see them reading -0.1oz or something initially, presumably to avoid the need for the employees to zero it out each time they weigh something.
My feeling is that they have a few presets available for the containers they use, but it totally can be changed to a custom one.
> The UK banned single use plastic bags at major supermarkets. We all moaned about it for a few minutes, forgot our reusable bags a couple of times and then got on with it.
I hope you're right. Here in Norway, the sensible people did what you describe. A large minority has, on the other hand, turned the lack of plastic bags (and straws, which I'm sure they barely even used once in a blue moon before) into a battlefield of the culture wars. And far-right politicians of course cater to them. They manage to capture discourse talking about "environmentalism gone wild" and "EU overreach". It's terribly annoying and they manage to waste everyone's time and derail important debates with this nonsense.
Funnily, you can still buy packs of plastic straws, just that they are sold as "reusable" with a cleaning brush (which likely no one ever uses). They are simply not the default option now, and that's enough to make some people rage.
These days, you never hear about reduce, reuse, recycle, and how it's supposed to be in that priority order. When I was a kid, that's what we were taught. Now it's just recycle, recycle, recycle.
My conspiracy theory is that corporate propaganda changed it, because reduce and reuse decrease demand, while recycle potentially only lowers production cost.
I highly recommend the documentary Plastic Wars (Frontline). It’s about how the plastics industry made a major marketing push for recycling starting in the 80s, in order to avoid plastic bans and ensure production continued to increase. It shifted the burden of plastic waste from producers to consumers, and we are essentially still in that conceptual space (at least in the US).
For sure. Plastic packaging keeps the product fresh and hermetically sealed from the clean factory / production depot to your store and eventual home. Get rid of plastic and there will be a LOT more spoilage.
Maybe that's an acceptable tradeoff, but most people don't even realize there is a tradeoff being made...
The goal of green-washing is to keep 'unlimited' growth.
Since all our local markets have introduced handheld scanners, I don't even bring my bags in. I put everything in the cart barcode up, get to the checkout, scan everything, pay, and go.
When I get to the car I unload into the bags. I'm sure it's not a thing for everyone, but I feel like I'm cutting out a fair bit of shuffling.
Plastic bags are making a comeback.
https://www.independent.co.uk/news/uk/home-news/plastic-bags...
For fruits and vegetables - Why is a bag needed at all ?
I just put my fruits and vegetables directly on the conveyor belt.
For most you don't. But if you buy loose cherry tomatoes, having 30 of them rolling around everywhere isn't exactly practical.
That's easily solved though by simply buying a reusable fruit/veggie net. Essentially the same as what you would use for socks or underwear in the laundry.
[dead]
I already use cellulose based bags for my compost waste, and they only stay reliable for about 3 days of usage after something is put in them. This makes them a huge pain to use. I think they also degrade quite a bit (i.e. shorter lifespan in use) after just a few months because each new roll of bags seems better at the beginning.
I found that using bags for compost isn’t really necessary at all. I just dump the container out each night and clean it along with my dishes. It’s nice this way because then nothing is ever actually rotting in my indoor trash.
Having a stainless steel compost container helps with this, as it’s easier to clean and doesn’t retain odors like the plastic bins.
I put my bags + compost in the fridge freezer, which prevents smell and also prevents the bag from biodegrading before I can take it out.
I recommend this approach in general.
Three days is more than enough.
This is a novel material with a set of properties and a production "story" that looks rather cool - recycled vines.
If those parameters meet the requirements for a material you need, then cool: use it. I don't see any concrete attributes in this article, which is fine, but "stronger than ..." is a bit weak.
The biodegradable thing is probably going to be key, if this stuff can hold hot liquids without poisoning the imbiber or can make plackey bags without falling to bits within seconds.
they linked to the study... https://pubs.rsc.org/en/content/articlelanding/2025/fb/d5fb0...
> These films exhibit a transparency of 83.70–84.30% mm⁻¹ and a tensile strength of 15.42–18.20 MPa. They biodegrade within 17 days in soil at 24% moisture content. These films demonstrate outstanding potential for food packaging applications. Our research approach of repurposing agricultural byproducts to create high-value products helps reduce plastic waste, conserve the environment, and provide economic benefits to farmers.
On the lower end for plastics, but might be fine for this application: https://www.researchgate.net/figure/3-Tensile-strength-and-i...
Seems comparable to LDPE, which I think the common bags are made from.
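As a rough sanity check, that comparison can be written down as data. The grapevine-film range comes from the quoted abstract; the LDPE and HDPE ranges are approximate handbook values and should be treated as ballpark assumptions, since exact figures vary by grade and film thickness:

```python
# Tensile strength ranges in MPa. The grapevine-film values are from the
# paper's abstract; the LDPE/HDPE ranges are approximate handbook values
# (assumptions for illustration, not from the paper).

tensile_strength_mpa = {
    "grapevine cellulose film": (15.42, 18.20),  # from the abstract
    "LDPE (typical film)": (8.0, 31.0),          # approximate
    "HDPE": (20.0, 37.0),                        # approximate
}

film_low, film_high = tensile_strength_mpa["grapevine cellulose film"]
ldpe_low, ldpe_high = tensile_strength_mpa["LDPE (typical film)"]

# The new film's range sits inside LDPE's, which is consistent with the
# "comparable to LDPE" reading above.
overlaps_ldpe = film_low <= ldpe_high and ldpe_low <= film_high
print(overlaps_ldpe)  # → True
```

If those handbook ranges are right, the film is in the same mechanical ballpark as bag-grade polyethylene, though strength alone says nothing about moisture resistance or shelf life.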
Some people are skeptical about biodegradable materials, but honestly, ten years ago nobody believed paper straws would catch on either. I think if we can turn leftover plant waste like grapevines into something useful, there's a real chance to start phasing out throwaway plastics in the kinds of products we only use once and forget.
Paper straws are horrible, I don't think they should be chosen as a counterexample to skepticism about biodegradable materials.
Agree. Give me a metal one or design the cups so that I don't need one at all.
You can buy reusable metal straws online or from a variety of retailers today, and you can remove (or refuse) the lid to a cup, at which point it can be used without a straw.
Another option would be to buy canned beverages rather than fountain drinks.
Canned beverages also include a plastic lining.
That's true, and as far as my quick research has gone, the plastic is burned off during the smelting process of aluminum recycling.
Because of the small amount of plastic in each can, and the high heat of the smelting process, odds are good the thin plastic liner will be almost fully combusted, which should greatly reduce the amount of microplastics.[1]
1: https://www.sciencedirect.com/science/article/abs/pii/S03043...
I can't believe that people have been fooled by the environmental credentials of "paper" straws.
It will require someone like LEGO (or others) to fully adopt it and prove its effectiveness, and for governments to mandate its usage while providing incentives for adoption. However, the plastics industry will likely resist this change strongly. There's also the issue of monoculture; the ideal is reduction (such as reusing durable cotton bags). I wonder, if plastic bags disappeared overnight, wouldn't people adapt? Probably a few extra trips to the supermarket at the start, but shortly after, a reusable container wouldn't be forgotten. There's more to plastics pollution than plastic bags though: water bottles, fast fashion and synthetic fibers, etc.
Ever since energy conservation and environmental protection became a focus, supermarkets have started charging for plastic bags. But I think relying on this kind of approach to reduce usage does not really solve the root problem. Instead of penalties, we should be thinking of practical and eco-friendly alternatives that make people genuinely want to change their habits.
Penalties are a key part of the picture though. They help cover the otherwise unpriced negative externalities.
Why is it that we read about so many inventions like this - once - and then never hear of them again?
Most countries have to import plastic along with their oil. Surely the economics of this gets worse every time oil or shipping prices rise. And more so if you account the cost of waste disposal.
There are economic incentives to scaling up these biodegradable alternatives. Are they not big enough to result in a push?
Google says global oil production is 90 to 95 million barrels per day.
That is a lot of grapevines, grapevines grow slowly, and growing grapes is the best way to use grapevines.
We read about technologies like this because science grad students have to do something, grad schools have low standards for useful work, and universities employ a lot of press release writers.
Per TFA, this is a highly manual, one-off process with a not-well-scaled resource: a specific type of vine cutting that can only be harvested every other year without affecting overall vine health.
So I'd wager it's the brutal road from proof of concept to scaled production.
That's a kind way to put it. I'd call it a shit idea that produced a paper for someone's CV.
Do the same with bamboo, HFCS waste, or something similar in availability? Now you're talking!
What about paper bags? In the UK retailers have to charge for single use plastic bags. Clothes retailers hand out strong paper bags for free, and charge for plastic.
Supermarkets charge for plastic bags. Paper bags for fruit and veg work well. They also provide quality reusable bags that cost a small amount (£1 or so), and people actually reuse them.
Alternatives to oil-based plastics have been developed for decades, sometimes with the oil industry's support. But astonishingly enough, we still burn oil to make unhealthy and unsustainable containers. I wonder what that force is that pushes us backwards every time we try to tame the oil industry.
Oh I got it: corruption!
As an owner of 30 trees, mainly oak trees, why the heck don't we do this with leaves...? I throw away 3 bins FULL of leaves every week and I can't even keep up. They drop leaves year round.
Do we have enough grapevines to replace all plastic in the world? If we need to destroy ecosystems to plant more grapevines, forget about it.
[deleted]
It's some tough stuff. I had to cut up a bunch once for my smoker; very strange type of wood, extremely stringy.
Amazing! They discovered cellophane!
How flammable are cellulose products? Would that be a concern for this sort of packaging?
Well, given that almost all products are (currently) ultimately containerized in cardboard, I'd say not much changes. From bags of apples in fruit boxes to your hamburger in plasticized paper, inside a pure paper bag, it's all flammable.
That is neat, but not breaking down quickly is why we use it so often and why we find it so useful. We already have and use a ton of cellophane, but stores and producers avoid it in favor of plastic because plastic doesn't meaningfully degrade in the store or warehouse even if climate control conditions are shitty.
You could innovate to zero emissions, but if the culture is hostile to it or angrily doesn't give a crap because 'culture' - then it's worthless.
Isn’t this cellophane?
It's a miraculous thing corporations have done convincing us that we're the ones polluting the environment.
We do pollute. They are worse, but we still do it. And sometimes change needs to start from where you are at, not waiting for someone else to go first.
Relevant fallacy: https://rationalwiki.org/wiki/Not_as_bad_as
I grapple with this all the time. My wife is very eco-conscious and will scrub out a deeply moldy glass jar just to recycle it (whether the recycling system works is a separate issue here). On one hand, there is some truth to the idea that if we all just work together to do the right thing, the world is a much better place to live in. Sometimes I don't want to do this (scrub gross shit out) because I'm lazy; other times it feels futile. Or maybe the latter is just a good excuse to be lazy.
I'd argue that even thinking about the idea of recycling and eco-conscious behavior is something only the already wealthy (with respect to the rest of the world) can do. There are plenty of developing nations where consumption and pollution run rampant and unchecked and unregulated which do tons more damage than me throwing 1 glass jar into a semi well managed landfill.
I mean, there are just so many facets to this - does recycling work, does collective action work, or are corporations the real devils here, doing much more harm than the collective at large?
I feel that the only way to change anything is through government-level policy (which also feels futile), but individual actions do little without policy plus propaganda to disseminate the right message and change collective behavior.
Developing nations generally leapfrog by adopting the latest generation of developed world tech.
Imagine people saying they didn't want to adopt mobile phones because developing nations didn't have traditional telephones yet.
This applies to both green tech and to green regulations. They'll look to the EU and China for that, as the US is going it alone again. China recycles 30% of its plastic compared with 12% in the US. Presumably they look at it as an engineering problem to solve and not a fake culture war to protect the oil industry.
Slightly older data here but the trend and the major outlier of the US visible here:
https://ourworldindata.org/grapher/share-plastic-waste-recyc...
> I'd argue that even thinking about the idea of recycling and eco-conscious behavior is something only the already wealthy (with respect to the rest of the world) can do.
On the other hand, growing up poor behind the Iron Curtain, the thought of not recycling glass jars was crazy.
The thing is, wealthy societies just buy things. We were not only washing those jars but re-filling them as well with what we produced.
And I think same goes when one is 'eco-conscious'. Recycle sure, but buy less.
If you have a dishwasher that will get the jar plenty clean to be recycled and not smell up your house while it's waiting to be taken out.
Corporations don't do things that people don't want to pay for.
The entire purpose of their existence is to provide products to customers that want them.
The miraculous thing is people eschewing responsibility by blaming the person selling products to the people that want them.
If it weren't for all those drug dealers, we wouldn't have any addicts.
Your explanation assumes that 1) people have full knowledge of everything corporations do and 2) corporations aren't hiding what they do.
Corporations actively use addiction and psychological manipulation. They're not just passively filling consumer wants.
Your drug dealer analogy actually proves the opposite: we hold dealers responsible precisely because we recognize supply drives addiction. That's exactly why we have laws against dealing rather than just treating addiction as purely a demand-side problem. By your analogy, drug dealing should be legal because it gives the people what they want.
> Corporations actively use addiction and psychological manipulation. They're not just passively filling consumer wants.
Are you suggesting people have a plastic bag addiction? What exactly are the plastic bag manufacturers doing that is unethical? Let's use real examples instead of vague accusations. I'm not going to start with your assumptions that corporations are all evil and are definitely doing bad stuff so you're going to need to cite examples about this specific case.
> By your analogy, drug dealing should be legal because it gives the people what they want.
How much of the harm of drugs comes from the illegality of the market? What of the drugs that are legal - why aren't they so harmful? There's a great case study on the effects of black markets: the US banned alcohol, caused a massive surge in organized crime, then reversed the ban and solved the problem it had created.
Drugs cause harm. So do cars, so do plastic bags, so do knives, so do guns, etc. Harm to users/consumers is sometimes a good reason, sometimes not, to make things illegal.
Do you work for a plastic bag company?
The American Progressive Bag Alliance (representing Novolex, Hilex Poly, Superbag, and Advance Polybag) has:
- Spent $6+ million fighting California's bag ban through misleading ballot measures https://resource-recycling.com/plastics/2016/08/17/plastic-b...
- Funded studies claiming reusable bags harbor dangerous bacteria (while omitting that washing eliminates this) https://archive.is/p7Qza
- Sued cities implementing bag bans, forcing expensive legal defenses https://www.politico.com/news/2020/01/20/plastic-bags-have-l...
> Spent $6+ million fighting California's bag ban through misleading ballot measures
"Misleading". 6m is hardly anything in a CA election. The article even agrees, “[The $6.1 million spent by manufacturers] is big money, but in California, for 17 ballot measures, it’s essentially petty cash,” he said.
How much evil money did the ban supporters spend? It's funny how negatively you phrase the actions taken by the side you don't support.
And yet the ban exists.
> Funded studies claiming reusable bags harbor dangerous bacteria (while omitting that washing eliminates this)
I know lots of people that use reusable cloth bags and don't wash them. How many studies did the environmental lobbies fund to prove that plastic is horrible, ignoring the fact that most retailers simply replace cheap plastic bags with heavy-duty "reusable" plastic?
> Sued cities implementing bag bans, forcing expensive legal defenses
Nobody should sue cities now? That's pretty rich considering how often governments are sued by environmental groups.
> The entire purpose of their existence is
to make money.
Customers wanting the product or not is only one of the paths to that. Aligning with competitors to avoid profit-reducing changes to the market is one way to optimize for money while giving the middle finger to customers.
> people eschewing responsibility by putting blaming the person selling
Waving away the responsibility of companies with money flows the size of a small nation, crazy marketing budgets, and plenty of access to lobbying and political power at an international level is way worse in my book.
> Corporations don't do things that people don't want to pay for
Have you heard of lobbies and the billions of dollars companies spend on advertising, targeting everyone from the moment their mom shits them out into the world?
Are people born wanting an iPhone 98 Max S pro and a Ford mustang gt5000 7.0 ultimate? I doubt it, but they sure are influenced by comics/movies/ads 24/7 into wanting them.
Do you think the average Joe stands a chance against Zuck and his friends hiring the top behavioral scientists and paying them $1M a year to make sure their ad delivery platforms are as addictive as possible?
Agreed with the first sentence (and only that). That's why the state must legislate and enforce rules that benefit the population.
I don't even condemn businesses (too much). For a single business to be more eco-friendly, it must raise costs and lose competitiveness. If a state mandates this stuff, all businesses will be on the same level - and they'll have to compete for practical or cheaper ways to be eco-friendly.
It's the tragedy of the commons, and the only way to win is to enforce rules for everyone.
Every person I know that works "back of the house" says the amount of plastic that you don't even see as a consumer is at least 10x the final consumer packaging.
I've been down this road before, and been brutally downvoted, but I'll say it again:
- corporations are responsible for creating products which can be recycled;
- the consumer is responsible for proper disposal of their waste, and also for electing officials who have actual policies on reducing or eliminating pollution;
- local government is responsible for setting up recycling centers, and for enforcing correct behavior in consumers.
The consumer is at the bottom of all this, directly responsible for polluting the environment.
Oft-stated opinions like yours are lazy and ignorant.
It’s not a radical thought to hold corporations accountable after they have limited our choices and controlled markets. So many things most Americans buy are manufactured needs, so built into the culture that we think we need them. Procter & Gamble has written books about strategies for synthesizing a market.
OK, but I find this simplification to be what is lazy. Obviously the world isn't as clear-cut as only those 3 groups, as if they're not intertwined.
And what does "directly responsible for polluting the environment" even mean? If I pay someone to take my trash out and throw it in the ocean, am I all of a sudden exempt because I'm not the one "directly" polluting?
Pollution comes from a complex system, so it has to be solved as such. Blaming individual participants (especially the ones with less money and power) reduces the responsibility of the rest, which is the perfect excuse to do nothing.
I'm furious because I see non-stop behavior of consumers dumping their garbage, either in nature or in municipal waste sites next to the bins, because they're too goddamned lazy to literally lift up their arms - let alone sort garbage for recycling.
And this is in a major city in middle Europe, one of the centers of "civilization".
If it's this bad here, what must it be like in countries with less developed social and economic systems?
This is the core of consumer responsibility, and it's a dismal failure.
A very good article
So, celluloid?
Now just ship it before the oil industry wakes up and lobbies this to death.
I'm skeptical that new materials like this will meaningfully drive down the demand for virgin plastic packaging. The problem is not just the absence of good alternatives; it's the fact that plastic is the fossil fuel industry's backup plan for the global transition to cleaner energy sources.
That is: in preparation for a decrease in global demand for energy from fossil fuels, the industry is ramping up production of plastic to compensate so that it can maintain profitability (instead of, you know, just slowing down the extractive capitalism). Plastic production is set to triple over the next few decades as new facilities are built to support this transition.
(Source: Paraphrasing from my vague recollection of A Poison Like No Other by Matt Simon, and also articles like this one https://www.ecowatch.com/plastic-production-pollution-foreca...)
[dead]
This is why I'm constantly asking: why aren't we planting vineyards in the Wasatch Front? Silicon Slopes didn't work out but can we at least farm some effing grapes?
I don’t know SLC very well but I’d guess it’s a combination of water consumption, and a bad value:land ratio because the wine won’t be good.
I don't think there are good or bad wine-growing regions so much as there are places where people have figured out how to make good wines. The Finger Lakes had a bad reputation once, but after people figured out Rieslings and some more affordable whites, that reputation changed. More recently it was famous for soda-pop-sweet wines like Red Cat, but I've had some dry reds lately that weren't as bad as what I had 20 years ago.
People are making progress in Utah too
It was a rhetorical question.
Despite there being many great breweries in that region, most people shy away (initially) from a state run by a prohibition-style religion. Probably illogical, but definitely real in my experience.
Here in Oregon, vineyards and especially hop yards are being taken out, demand for alcohol overall is down, and same goes for the related tourism.
Biodegrades into what? Microplastics?
If I’m not mistaken this is ecologically basically a paper bag that looks like a plastic bag. Remember when we all switched from paper bags to plastic bags to save the environment? The environmental issue isn’t plastic bags, it’s that you don’t reuse them.
Nope, and I'm 60. AFAICT we switched because plastic bags were far stronger, and didn't fail immediately if they got damp.
Right. Plastic bags could be re-used, even after getting damp and because they wouldn't rip. So industry switched to them to be more environmentally friendly, and then after a while, no one actually re-used the bags, making them perhaps better in theory but ultimately worse in practice. I don't have the life experience you have but ChatGPT tells me this was a major reason.
Great, just what we needed as companies are pushing even more aggressively for planned obsolescence. "Biodegradable" just means "self-destructs automatically so we can keep selling you more".
Graphene just broke a fundamental law of physics
"Universality in quantum critical flow of charge and heat in ultraclean graphene" (2025) https://www.nature.com/articles/s41567-025-02972-z :
> Abstract: [...] Here we have discerned the quantum critical universality in graphene transport by combining the electrical and thermal conductivities in very high-quality devices close to the Dirac point. We find that they are inversely related, as expected from relativistic hydrodynamics, and the characteristic conductivity converges to a quantized value. We also observe a giant violation of the Wiedemann–Franz law, where the Lorentz number exceeds the semiclassical value by more than 200 times close to the Dirac point at low temperatures. At high temperatures, the effective dynamic viscosity to entropy density ratio close to the Dirac point in the cleanest devices approaches that of a minimally viscous quantum fluid within a factor of four.
Wikipedia lists some limitations of the Wiedemann–Franz law[1], and also some previous violations in other materials.
Reading the Wikipedia page I don't get the sense the law is quite as fundamental as the headline and summary make it sound like.
Here's one of the previous violations:
In 2011, N. Wakeham et al. found that the ratio of the thermal and electrical Hall conductivities in the metallic phase of quasi-one-dimensional lithium molybdenum purple bronze Li0.9Mo6O17 diverges with decreasing temperature, reaching a value five orders of magnitude larger than that found in conventional metals obeying the Wiedemann–Franz law. This is due to spin-charge separation, with the material behaving as a Luttinger liquid.
Still, graphene is cool and seems to be the gift that keeps on giving in terms of surprising results in solid state physics.
[1]: https://en.wikipedia.org/wiki/Wiedemann%E2%80%93Franz_law#Li...
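For context, the Wiedemann–Franz law says the ratio of electronic thermal to electrical conductivity scales linearly with temperature, with the semiclassical (Sommerfeld) Lorenz number:

```latex
\frac{\kappa}{\sigma} = L\,T,
\qquad
L_0 = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2
\approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}}
```

The "giant violation" in the abstract means the measured Lorenz number L exceeds L_0 by more than 200x near the Dirac point.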
That twisted SWCNTs (single-walled carbon nanotubes) store energy basically without heat loss is incredibly under-capitalized.
/?hnlog graphene (345 references), vortices (70 references) .. westurner.github.io/hnlog/
That's it, I'm writing a tool to parse this for citations
From "Single atom defect in 2D material can hold quantum information at room temp" (2024) https://news.ycombinator.com/item?id=40478219 :
> - "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. "Electron Vortices in Graphene Detected" https://news.ycombinator.com/item?id=40360691
>> re: the fractional quantum hall effect, and decoherence: How are spin currents and vorticity in electron vortices related?
> [...] But the Standard Model Lagrangian doesn't describe n-body gravity, n-body quantum gravity, photons in Bose-Einstein Condensates; liquid light in superfluids and superconductors, black hole thermodynamics and external or internal topology, unreversibility or not, or even fluids with vortices or curl that certainly affect particles interacting in multiple fields.
This is probably wrong if these are also true:
This says that the standard model actually does describe the n-body orbits of the planets:
"Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 .. https://news.ycombinator.com/item?id=45220460
There's also this:
"Fluid vacuum yields exact solutions to Pioneer anomaly and Mercury's perihelion (2019)" https://news.ycombinator.com/item?id=45220585
How difficult is it to make clean graphene?
There are a few methods to make rhombohedral graphene (which demonstrates superconductivity at room temperature).
Normal graphite stacks in a hexagonal ABAB (Bernal) pattern.
For superconductivity, the layers need to be at least ABC (because twisted bilayer graphene does not demonstrate the effects (superconductivity, quantum hall effect) at room temperature FWIU).
Current process: CVD (chemical vapor deposition), then sorting and stacking graphene flakes.
Flash heating plastic yields graphene and hydrogen; but you must capture the flue gas.
There are newer plastic recycling methods that intentionally don't produce graphene, but that could perhaps be adapted to produce both recycled plastic and graphene.
But graphene is hazardous, sort of like coal ash; so IIUC, making graphene onsite (e.g. from unsorted 'recycled' plastics) and locking it into glass or another substrate avoids the transport risks.
Perihelion precession of planetary orbits solved from quantum field theory
Do I understand correctly that they are claiming to have found a way to explain General Relativity from quantum field theory? Are all other effects of General Relativity also explained by this quantum field theory? If not, it would seem to contradict General Relativity if it only explains some of the effects that are predicted (and measured) by General Relativity.
From "Perihelion precession of planetary orbits solved from quantum field theory" (2025) https://arxiv.org/abs/2506.14447 :
> Abstract: [...] We derive the perihelion precession of planetary orbits using quantum field theory extending the Standard Model to include gravity. Modeling the gravitational bound state of an electron via the Dirac equation of unified gravity [Rep. Prog. Phys. 88, 057802 (2025)], and taking the classical planetary state limit, we obtain orbital dynamics exhibiting a precession in agreement with general relativity. This demonstrates that key general relativistic effects in planetary motion can emerge directly from quantum field theory without invoking the geometric framework of general relativity.
Gravity of n-body planets from QFT, but not what else?
Where doesn't a QFT-extended or SQR or SQG or other Alternative Theory to GR correspond to real observations or to GR?
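For comparison, the classical GR prediction the paper reproduces is the perihelion advance per orbit for a body with semi-major axis a and eccentricity e around mass M:

```latex
\Delta\phi = \frac{6\pi G M}{c^{2}\,a\,(1 - e^{2})}
```

For Mercury this works out to the famous ~43 arcseconds per century beyond Newtonian perturbations.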
/? Testing of alternatives to general relativity:
- https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
- https://news.ycombinator.com/item?id=43310933
Re: gravity: https://news.ycombinator.com/item?id=37968618
Anyways, what's what here:
CM: Classical Mechanics (Newton, Leibniz, Lagrange,)
Quantum theory (Planck, Einstein)
SR: Special Relativity (Einstein (1905))
Photoelectric effect (Einstein (1905))
Minkowski spacetime; Lorentz group, Poincaré group
GR: General Relativity (Einstein (1915) nb. student of Minkowski)
QM: Quantum Mechanics
QG: Quantum Gravity
QFT: Quantum Field Theory
QED: Quantum Electrodynamics
QHD: Quantum Hydrodynamics
Gödel's Dust solution (1949)
Wheeler-Feynman Absorber Theory (1945, 1949)
SVT: Superfluid Vacuum Theory (1950,)
SPH: Smooth Particle Hydrodynamics (1970s,)
SM: Standard Model (1960s, 1970s, 2012 (Higgs boson confirmation))
LQG: Loop Quantum Gravity
QCD: Quantum Chromodynamics
GRMHD: General Relativistic Magnetohydrodynamics
SQS: Superfluid Quantum Space
SQR: Superfluid Quantum Relativity (Fedi,)
SQG: Superfluid Quantum Gravity
...
From https://news.ycombinator.com/item?id=43310970 :
> He said there's a newer version of this:
>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/ .. https://scholar.google.com/scholar?cites=5114463164920978709...
From https://news.ycombinator.com/item?id=38034923#38061551 :
> Fedi's [SQR Superfluid Quantum Relativity] also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter
...
Evidence for scale invariance and magnetohydrodynamics:
"Braided Magnetic Flux Ropes Are Found at Both Human and Light Year Scales" (2025) https://news.ycombinator.com/item?id=44993092 :
>> One of the most exciting aspects of this research is that magnetohydrodynamics, the theory of magnetized plasmas, turns out to be fantastically scalable.
"Physical vacuum as a dilatant fluid yields exact solutions to Pioneer anomaly and Mercury’s perihelion precession" (2019) https://cdnsciencepub.com/doi/10.1139/cjp-2018-0744
The Software Engineers Paid to Fix Vibe Coded Messes
Typical coding LLM issues:
- Hallucinations
- Context limits
- Lack of test coverage and a testing-based workflow
- Lack of actual docs
- Lack of a spec
- Great README; cool emoji
Sooo the LLM codes just like me ?
No; it doesn't care when it gives you incomplete garbage.
You have to tell it to validate its own work by adding to, refactoring, and running the tests before it replies.
Most junior developers do care and would never dump partial solutions on the prompter as though they're sufficient, the way LLMs do.
Every time, I remember to get `make test-coverage` working and have myself or the LLM focus on lines that aren't covered by tests.
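That loop can be sketched with the stdlib `trace` module (illustrative only; `clamp` is a toy function, and a real `make test-coverage` target would typically wrap coverage.py or `pytest --cov`):

```python
import trace

def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Count which lines actually execute; lines with zero hits are exactly
# the ones to point the LLM (or yourself) at next.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(clamp, 5, 0, 10)  # only exercises the fall-through branch
counts = tracer.results().counts  # {(filename, lineno): hit count}
executed = sorted(lineno for (_, lineno) in counts)
```

Here the two boundary branches of `clamp` never run, so they would show up as uncovered.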
Junior or senior, an employee wouldn't turn in such incomplete, non-compiling assignments that % of the time, even given inadequate prompts as specifications.
If you’re hiring someone remotely without any trust, you could absolutely get random garbage that pretends to be real work from a human.
A human software developer doesn't code in a void; they interact with others.
The same goes when you have an AI coder: you interact with it. It's not fire-and-forget.
well that's enough for "good-looking documentation-is-everything" kinda teams
I'd take tests over docs but that's a false dilemma.
What does the (Copilot) /tests command do, compared to a prompt like "Generate tests for #symbolname, run them, and modify the FUT (function under test) and run the tests in a loop until the tests pass"?
Documentation is probably key to the Django web framework's success, for example.
Resources useful for learning to write great docs: https://news.ycombinator.com/item?id=23945815
"Ask HN: Tools to generate coverage of user documentation for code" https://news.ycombinator.com/item?id=30758645
NASA offers $155,000 to design moon tires
> 24 km/h
> 18 inch wheels
It would probably be best to be able to fabricate lunar transport tires on the moon.
Which metals and which carbon-based materials handle temperatures ranging between -427°F and 250°F? Which of those materials can be fabricated on the moon?
All-electric fusion induction welding would probably work on the moon (without filler gases).
"Atomic-level engineering enables new alloys that won't break in extreme cold" (2025) https://phys.org/news/2025-09-atomic-enables-alloys-wont-ext... :
"Dual-scale chemical ordering for cryogenic properties in CoNiV-based alloys" (2025) https://www.nature.com/articles/s41586-025-09458-1
What about solar sintering + induction heating of lunar regolith?
Basalt, Glass,
Molten oxide electrolysis of regolith would yield Oxygen, Silicon, Iron, Aluminum, and Titanium for 3d printing and for solar panels and semiconductors.
Social media promised connection, but it has delivered exhaustion
When social media emerged, I remember how excited I was about how it could connect like-minded people around the world. Now in 2025, the leader of the biggest platforms is talking about making people less lonely by connecting them to AI chatbots instead of helping people find one another. That just feels like a huge lost potential.
> When social media emerged, I remember how excited I was how it could connect like-minded people around the world.
I remember that feeling of being blown away at talking (typing) with people across the world without any limitations!
But for me this was in the late 80s and earliest 90s on the Internet. When all communication was standards-based, fully interoperable and completely free.
What we call today "social media" is just the proprietarization, for profit, of what existed before in a much more open fashion.
Social media existed before social media. We had forums for permanent collaboration (lecture hall style), and we had IRC for quicker ephemeral discussions (bar style). What we didn’t have was the focus on individuals. To have a brand meant you were working on something useful for a group.
Today’s social media heavily focus on the individual, not the group, which is ironic. It’s a lot of people clamoring for attention while also consuming only through the algorithm (aka the echo feedback).
The old social media was more like going out. Instantly you feel that not everything is about you. But you still had a familiar place to hang out and a useful place when you needed something.
The over-generalization of the term "social media" drives me bonkers. In the olden days we had things like message boards, forums, and chat rooms. Then came social networks. All of those terms reflect some sort of connection between people.
When I see the term social media, I associate it with one-way relationships. It is about connecting businesses to customers, not the other way around. It is about connecting self-promoters (for lack of a better term) to an audience, not the other way around. As you said: the focus is on the individual, be that a person or a business.
Perhaps we should be making an effort to distinguish between the two environments, to avoid associating connecting businesses and self-promoters to customers with connecting people to each other.
> The old social media was more like going out
>> [Social media] is about connecting businesses to customers, not the other way around
Originally there were no business accounts, ads, or news feeds on Facebook, for example.
From https://news.ycombinator.com/item?id=35877603 :
> for the record, e.g. Facebook did originally require a .edu email address at an approving institution
What were the other pivots from that original product - a textual personal profile where you could only write on other people's walls - to profitability?
Multi-octave frequency comb nanophotonic parametric oscillator
"Multi-octave frequency comb from an ultra-low-threshold nanophotonic parametric oscillator" (2025) https://www.nature.com/articles/s41566-025-01753-7
NewsArticle: "Chip-based laser device delivers coherent light across widest spectrum yet" (2025) https://www.yahoo.com/news/articles/chip-based-laser-device-...
Safe C++ proposal is not being continued
Rust then?
From "The state of Rust trying to catch up with Ada [video]" https://news.ycombinator.com/item?id=43007013 :
> [awesome-safety-critical]
> rustfoundation/safety-critical-rust-consortium: https://github.com/rustfoundation/safety-critical-rust-conso...
rust-lang/fls: https://github.com/rust-lang/fls
How does what FLS enables compare to these Safe C++ proposals?
Safe C++ draft: https://safecpp.org/draft.html
Beyond Traditional Pseudorandomness, Tsotchkes' Quantum Random Number Generation
tsotchke/quantum_rng: https://github.com/tsotchke/quantum_rng :
> 4.82M operations per second
> 178.45 MB/sec throughput
ingen0s/quantum_rng_rust_lib: https://github.com/ingen0s/quantum_rng_rust_lib :
> ~100M random numbers per second
What about if you then feed that into a DRBG?
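Feeding raw TRNG/QRNG entropy into a DRBG is standard conditioning; here is a minimal sketch loosely modeled on a hash-based DRBG (the `HashDRBG` class and its seeding/update steps are illustrative, not NIST SP 800-90A's exact construction, and not for production use):

```python
import hashlib

class HashDRBG:
    """Illustrative hash-based DRBG sketch (loosely modeled on NIST SP
    800-90A's Hash_DRBG; not the standard's exact construction)."""

    def __init__(self, entropy: bytes):
        # Condition raw TRNG/QRNG entropy into a fixed-size internal state.
        self.state = hashlib.sha256(b"seed" + entropy).digest()
        self.counter = 0

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.state + self.counter.to_bytes(8, "big")
            ).digest()
        # Ratchet the state forward so earlier outputs can't be recomputed
        # from a later state compromise (backtracking resistance).
        self.state = hashlib.sha256(b"update" + self.state).digest()
        return out[:n]

drbg = HashDRBG(entropy=b"raw qrng bytes")  # hypothetical QRNG output
block = drbg.generate(32)
```

Same seed reproduces the same stream, while the state ratchet after each call gives a simple form of backtracking resistance.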
TRNG based on quantum vacuum:
From https://news.ycombinator.com/item?id=44371059 :
> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
From .. https://news.ycombinator.com/item?id=43497414 :
>>> google/paranoid_crypto.lib.randomness_tests
There are NIST randomness tests, for example:
https://github.com/google/paranoid_crypto/tree/main/paranoid...
https://github.com/google/paranoid_crypto/blob/main/examples...
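For example, the first test in NIST SP 800-22 - the frequency (monobit) test - checks that the proportion of ones is close to 1/2; a minimal sketch:

```python
import math

def monobit_test(bits: str, alpha: float = 0.01) -> bool:
    """NIST SP 800-22 frequency (monobit) test: in a random sequence the
    proportion of ones should be close to 1/2. Returns True if the
    sequence passes at significance level alpha."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value >= alpha
```

A balanced stream like `"01" * 50` passes, while a biased stream like `"1" * 100` fails.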
"Traceable random numbers from a non-local quantum advantage" (2025) https://www.nature.com/articles/s41586-025-09054-3 :
> Here we demonstrate a fully traceable random number generation protocol based on device-independent techniques. Our protocol extracts randomness from unpredictable non-local quantum correlations, and uses distributed intertwined hash chains to cryptographically trace and verify the extraction process. This protocol forms the basis for a public traceable and certifiable quantum randomness beacon that we have launched.
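The "intertwined hash chains" idea can be sketched as a single chain (illustrative only; the paper's actual protocol distributes and intertwines chains across multiple parties):

```python
import hashlib

GENESIS = hashlib.sha256(b"genesis").hexdigest()

def hash_chain(outputs):
    """Chain each beacon output to its predecessor's digest, so altering
    any past output invalidates every subsequent link."""
    chain = [GENESIS]
    for out in outputs:
        chain.append(
            hashlib.sha256(bytes.fromhex(chain[-1]) + out).hexdigest()
        )
    return chain

def verify(chain, outputs):
    # Recompute the chain from the published outputs and compare.
    return chain == hash_chain(outputs)
```

Any retroactive tampering with a published output makes verification fail for every later link, which is what makes the randomness traceable.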
FWIU also there are already QRNG or QTRNG based upon photonic speckle pattern interference; LEDs and a photodiode and an FPGA or an RP2040 or better.
First 'perovskite camera' can see inside the human body
ScholarlyArticle: "Single photon γ-ray imaging with high energy and spatial resolution perovskite semiconductor" (2025) https://www.nature.com/articles/s41467-025-63400-7
Fluid vacuum yields exact solutions to Pioneer anomaly and Mercury's perihelion (2019)
"Physical vacuum as a dilatant fluid yields exact solutions to Pioneer anomaly and Mercury’s perihelion precession" (2019) https://cdnsciencepub.com/doi/10.1139/cjp-2018-0744
QEMU 10.1 experimental support for compiling to WASM
Gregg Kellogg has died
> In August 4th, I went into the MarinHealth Emergency Room, due to increased stomach pain on top of symptoms which became more acute in June. I've had a reduced appetite, with consequent weight loss, for about the last year. I had been fighting to keep weight on for some time, then in July, Rebecca and I went back to our usual haunt at the Hotel Wailea in Maui, which we love. Towards the end of the trip, I had a sudden and dramatic loss of appetite, more than the usual.
It’s incredible that someone could have such symptoms for a year and not a single doctor ordered an abdominal ultrasound. Given the outcome, this might have been a blessing: he was able to live his last year without knowing about the disease, which realistically isn’t curable. But at the same time, it could just as easily have been another abdominal tumor where a year’s delay would have made a huge difference.
May he rest in peace and bless his family.
Pancreatic cancer can be curable in some cases - see the Whipple procedure:
https://en.m.wikipedia.org/wiki/Pancreaticoduodenectomy
That said, it would depend on several other factors, not least catching the tumour early enough - and it looks like a pretty tough thing to go through even if successful.
Gregg chose not to undergo surgery:
> This is major life-changing surgery with a long and difficult "recovery". I have elected not to do this, due to existing co-morbidities from my sorted past and the expectation that the recovery would exceed my lifespan, which I'd rather keep as normal as possible.
I wish I'd known about that post before he died; I'd have sent him my best regards personally rather than just saying nice things about him online now :(
/? pancreatic: https://hn.algolia.com/?q=pancreatic
Thanks for JSON-LD (and YAML-LD) and your contributions to so many W3C specifications over mailing lists and ReSpec specification documents!
Now I'll have to finish a PR to pydantic_schemaorg to build data validators for Linked Data that - by conforming to W3C Standards - enables industry and research to describe all of the things in a giant LODcloud.
yml2vocab also processes RDFS vocabularies.
JSON-LD, for example, makes it possible for all of us to find the website URL, phone number, business hours, and accessibility info for an Organization > LocalBusiness Place on the map with :latitude and :longitude fields; and to find and annotate CreativeWorks, ScholarlyArticles, PDF DigitalDocuments, SoftwareApplications, FHIR JSON-LD, and so on.
JSON-LD foregoes XML parser complexity (and vulns), but because JSON-LD maps to RDF with an @context, you can use vocab URIs for XML XSD types like xsd:boolean and xsd:double. But there is not yet a standard way to express a complex number like 0.8+0.8j with XSD, RDF, or JSON-LD.
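The @context mapping described above can be sketched as a plain JSON document. This is a hypothetical minimal example: the schema: and xsd: vocabulary terms are real, but the business details are made up.

```python
import json

# Minimal JSON-LD sketch: the @context maps plain JSON keys to vocabulary URIs
# and XSD datatypes, with no XML parser involved.
doc = {
    "@context": {
        "xsd": "http://www.w3.org/2001/XMLSchema#",
        "schema": "https://schema.org/",
        "free": {"@id": "schema:isAccessibleForFree", "@type": "xsd:boolean"},
    },
    "@type": "schema:LocalBusiness",
    "schema:telephone": "+1-555-0100",   # made-up phone number
    "free": True,
}
serialized = json.dumps(doc, indent=2)   # round-trips as ordinary JSON
```

Expanding this with a JSON-LD processor (e.g. pyld) would yield RDF triples whose object values carry the xsd:boolean datatype.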
Longhorn – A Kubernetes-Native Filesystem
Longhorn is a poorly implemented distributed storage layer. You are better off with Ceph.
I've heard Ceph is expensive to run. But maybe that's not true?
I'm only just wading in, after years of intent. I don't feel like Ceph is particularly demanding. It does want a decent amount of RAM: 1 GB each for monitor, manager, and metadata daemons, up to 16 GB total for larger clusters, according to the docs. But then each disk's OSD defaults to a 4 GB memory target, which can add up fast! And some use cases can use more. 10GbE is recommended and more is better, but that seems not unique to Ceph: syncing storage will want bandwidth. https://docs.ceph.com/en/octopus/start/hardware-recommendati...
This from 2023 ( https://www.redhat.com/en/blog/ceph-cluster-single-machine ) says:
> All you need is a machine, virtual or physical, with two CPU cores, 4GB RAM, and at least two or three disks (plus one disk for the operating system).
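The way OSD memory "adds up fast" can be sketched as back-of-envelope arithmetic. The daemon counts below are hypothetical; the per-daemon figures are the defaults quoted above.

```python
# Rough memory budget for a small hypothetical Ceph node.
osds = 8                   # one OSD per disk
osd_target_gb = 4          # default osd_memory_target per OSD
daemons_gb = 1 + 1 + 1     # ~1 GB each for monitor, manager, metadata
total_gb = osds * osd_target_gb + daemons_gb
print(total_gb)            # 35
```

So an 8-disk node already wants far more than the 4 GB single-machine minimum.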
Things you can do with a debugger but not with print debugging
It's not a silver bullet, but Visual Studio is leaps and bounds ahead of gdb et al. for debugging C/C++ code. "Attach to process" and being able to just click a window is so easy when debugging a large Windows app.
lol, agree to disagree here. While the interface to gdb is annoying, there are many gui frontend alternatives.
VS, on the other hand, gets worse with every release. It is intolerably slow and buggy at this point. It used to be a fantastic piece of software, and is now a fantastic pile of shit.
Any recommendations on gdb frontends? Have tried with emacs, but I just really enjoy the point and click stuff, emacs keybinds don't work for me there.
From https://news.ycombinator.com/item?id=35710350 :
> ... py-list, py-up and py-down, py-bt, py-print, and py-locals GDB commands
> [ DDD, pyclewn (vim), trepan3k, Voltron, gdbghidra ]
gdbghidra: https://github.com/Comsecuris/gdbghidra
radare2: https://github.com/radareorg/radare2
voltron: https://github.com/snare/voltron
And from https://news.ycombinator.com/item?id=41943521 :
> pyvmidbg (libVMI + GDB)
But that's archived.
There's a QEMU gdbstub GDB interface.
To print registers with GDB:

    info reg
    info all-registers
    i r a

Marvel Studios is moving from Georgia to the UK to avoid paying health insurance
By moving to a country with universal healthcare, aren't they paying for all employees to have healthcare at a lower average annual per-capita cost?
It could be a cost-saving measure; it's not above a business to do such a thing to save a few pennies.
Generate a visualization of healthcare spending by capita by country; with pandas dataframes and Manim Python
And generate an open game engine game to explore healthcare spending by capita by country; to teach the same. With e.g. UPGE (Blender), panda3d, harfang, mujoco_menagerie, Unity, or StageCraft; and then build it in WASM to work on a browser tab with the client GPU
/? per capita healthcare spending by country: https://www.google.com/search?q=per+capita+healthcare+spendi...
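The data step of the visualization requested above can be sketched with pandas. The spending figures below are illustrative placeholders, not sourced data.

```python
import pandas as pd

# Illustrative per-capita healthcare spending (USD); numbers are placeholders.
df = pd.DataFrame({
    "country": ["USA", "Germany", "Canada", "UK"],
    "usd_per_capita": [12000, 8000, 6300, 5500],
})
df = df.sort_values("usd_per_capita", ascending=False).reset_index(drop=True)
print(df.to_string(index=False))
```

From here, df.plot.bar(x="country", y="usd_per_capita") would chart it, and Manim could animate the same frame.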
Also, the GBP and Euro are up on the USD since January 2025.
[flagged]
> Another film filmed in Georgia, USA; with Martin Sheen: "Vigilantes, Inc." (2025) https://youtu.be/P_XdtAQXnGE
Apparently GA was 160 for 500 on invalidating voters in the 2024 election; the state tried to eliminate the votes of some 340,000 registered voters who veritably lived at the address they registered with the state.
The producers of the film asked the same people that verify identities for Amazon and Walmart to review their list of voter invalidations.
(All this after sleazy Slump Chump (ad hominem, name calling) pushed the governor of said state for votes in 2020.)
TIL slavery was actually already illegal in Georgia before the American Revolutionary war (in the late 1700s).
> Does this effect lag or lead that effect?
TikTok has turned culture into a feedback loop of impulse and machine learning
Vine had 6 second short form video in 2012.
Vine: https://en.wikipedia.org/wiki/Vine_(service)
Short-form content: https://en.wikipedia.org/wiki/Short-form_content
YouTube still requires disconnecting connected chromecast devices to view YouTube Shorts?
Payroll employment (+22,000), unemployment rate (4.3%) change little in August
Are those the revised numbers (after the termination of the BLS commissioner this past month)?
From https://www.threads.com/@thestockmarket.news/post/DOY2Cu8DP-... :
> The US just wiped out -911,000 jobs in one revision.
> That’s 76,000 jobs per month that were never real.
> Even worse than the Great Recession’s revisions in 2009.
From https://www.threads.com/@thestockmarket.news/post/DOY2FtejG1... :
> Once a year, the BLS reconciles its survey estimates with the tax ledger. This process is called the benchmark revision.
> Most years, the difference is modest; maybe a few hundred thousand jobs either way.
> This year, the correction was massive.
Ask HN: Technology Teacher Needs Validation from Smarter People
Long story short. I teach K-8 technology. Middle school students figured out how to send memes and communicate with each other via Google slides and docs. The school thinks this is terrible and I must immediately reprimand them for it.
The problem. I am actually impressed that my students found a way to communicate with one another digitally within the police state environment that is managed Google Chromebooks and GoGuardian. Yes, if any of the memes were inappropriate I get that, it's bad. But I mean the technical solution to communicating with one another uses tools outside the box (definitely at their age) from within an authoritarian local system.
What should I do? I feel like telling them that their initial inclinations are valid because information wants to be free. Whether that be digitally, printing press, gossip etc.
Long story short again, I think what they figured out is a good thing. It means they are thinking critically about how to solve technical conditions which they consider problems. Thoughts? Any brilliant, wealthy people want to vouch for my perspective?
Why does the school need to disincentivize them working together?
Could there be a team project where they must use the groupware suite to solve for learning objectives?
In HS, we had a "students in small groups take a few weeks to prepare a lesson plan and teach one another" (with the instructor to fill in as necessary) that brought understanding.
In MS, there was a shared drive folder called "Ralph Nader _files/" - that looked like the report on politics and Save as HTML report - was full of ROMs and emulators until.
But that was 8th grade. ("Eighth Grade", "Good Boys")
It sounds like you're "good cop, bad coppin'" them. Good, good.
Perhaps there's a way to use social instincts and technology for learning objectives.
"Why does the school need to disincentivize them working together?" The school doesn't want them using this tool to send messages to one another during class or send inappropriate memes using it.
So instead of asking them not to make inappropriate memes (I don't think you could honestly convince students not to chat with one another digitally during class, if the option exists), they want to nuke the entire tool.
This school reprimanded me for not using go guardian to remotely monitor what websites they were trying to visit. I told my students it's a gross invasion of privacy for anyone but your parents.
Lots of problems here. They are also TERRIFIED of lawsuits. I couldn't get them to unblock neil.fun because IT convinced the principal there are links to porn sites from Neil.fun.
I showed otherwise but they refused to believe their own eyes. Every year it seems I have to sell more of my soul to digital fascism to keep my job.
IDK, "fine, due to no gapps sharing due to certain issues, then everyone must write a professional ScholarlyArticle manuscript with two columns, a title, authors, and abstract at the top" (with e.g. Notepad, LyX, Overleaf, CoCalc's time slider, LaTeX in Jupyter notebooks and Markdown, and code with test coverage and docs)
:Article, :ScholarlyArticle, :Thesis, "Dissertation" (for a doctoral degree)
Philosoraptor: Do people get fired for messing around on Slack and not paying attention in meetings or for underperforming?
Socially awkward penguin: Is team coordination over chat one of your strengths?
Learn to code and manage projects resources that work on school Chromebooks: Hour of Code, Hour of AI (2025), JupyterLite, jupyterlite-xeus with (PyData,) packages from emscripten-forge, Google Colab, Replit, JS Fiddle, container2wasm, vscode.dev, Dockerfile + devcontainers/devcontainers.json
These days it's pretty simple to generate a meme generator tool; but still there is the cost of moderation (which is part of the accounting equation for a team/org/business with limited funding and operating costs and margin).
Why wouldn't you want to host a service for pick a type (an rdfs:Class) of schema.org/CreativeWork?
How can employers minimize waste and maximize utility of tools for collaboration and also networking?
AI might yet follow the path of previous technological revolutions
AI is probably more of an amplifier for technological change than fire or digital computers; but IDK why we would use a different model for this technology (and teams and coping with change).
Diffusion of innovations: https://en.wikipedia.org/wiki/Diffusion_of_innovations :
> The diffusion of an innovation typically follows an S-shaped curve which often resembles a logistic function.
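The S-curve can be sketched as a logistic function (a standard form, not specific to the cited article):

```python
import math

def adoption(t, saturation=1.0, rate=1.0, midpoint=0.0):
    """Logistic S-curve: saturation / (1 + e^{-rate * (t - midpoint)})."""
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Slow start, rapid middle, saturating tail:
curve = [round(adoption(t), 3) for t in range(-6, 7, 2)]
```

Early adopters sit on the flat left tail, the majority in the steep middle, and laggards on the saturating right tail.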
From https://news.ycombinator.com/item?id=42658336 :
> [ "From Comfort Zone to Performance Management" (2009) ] also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages
Transforming, Performing, Reforming, [Adjourning]
Carnal Coping Cycle: Denial, Defense, Discarding, Adaptation, and Internalization
Americans face biggest increase in health insurance costs in 15 years
Visual representations in the human brain are aligned with LLMs
Aren't the visual cortex and auditory cortex already shown to be hierarchical, though LLMs with a fixed topology aren't really?
From https://news.ycombinator.com/item?id=41905326 :
> From https://news.ycombinator.com/item?id=40105068#40107537 re: cognitive hierarchy and specialization :
>> But FWIU none of these models of cognitive hierarchy or instruction are informed by newer developments in topological study of neural connectivity;
- Cortical columns
- The brain is at most 11-dimensional (11D)
"Two-dimensional neural geometry underpins hierarchical organization of sequence in human working memory" (2024) https://www.nature.com/articles/s41562-024-02047-8 .. https://news.ycombinator.com/item?id=42084285
"Hierarchical Reasoning Mode" https://news.ycombinator.com/item?id=44702554
"Hierarchy or Heterarchy? A Theory of Long-Range Connections for the Sensorimotor Brain" (2025-07) https://arxiv.org/abs/2507.05888
> Abstract: [...] The key to our proposal is what we call the “Thousand Brains Theory”, which posits that every cortical column is a sensorimotor learning system. Columns learn by integrating sensory input over multiple movements of a sensor. In this view, even primary and secondary regions, such as V1 and V2, can learn and recognize complete 3D objects. This suggests that the hierarchical connections between regions are used to learn the compositional structure of parent objects composed of smaller child objects. We explain the theory by examining the different types of long-range connections between cortical regions and between the neocortex and thalamus. We describe these connections, and then suggest the specific roles they play in the context of a heterarchy of sensorimotor regions. We also suggest that the thalamus plays an essential role in transforming the pose between objects and sensors. The novel perspective we argue for here has broad implications for both neuroscience and artificial intelligence.
Is self attention sufficient to model a Hierarchical Recurrent BNN?
A better VEP Visual Evoked Potential test for pediatric ophthalmology - to tell whether and how well infants can see - could be one practical external benefit of further study of the neural topology of the optic nerve and the visual cortex.
Show HN: Dog Rescue Transport Coordination Website
Recently, I was part of a dog rescue transport from Denver to Frisco, CO, and it opened my eyes to a major problem: coordinating volunteers shouldn't be this complicated. Between fragmented communication (Facebook, phone calls, emails, texts...), missing handoff details, and manual route planning, I wanted to work on a better solution.
Meet puptransfur.org - purpose-built to make rescue transports seamless and paw-fect. Easily plan routes, coordinate handoffs, manage timezones automatically, and document the journey with photos. Use the demo option to create your own route, or look at an example route: puptransfur.org/route/4041
If you are part of a dog rescue and need to organize dog transports frequently, would you want to use this website? Let me know and I can guide you through the first steps and show how everything works.
Some features: - address completion when setting up the route - suggested handoff locations to easily break up the route into multiple segments - handles timezones by adjusting the times to local times - volunteer information available to other drivers (car model and color, license plate, if available) - integrated email communication when drivers sign up, and when drivers are confirmed
And the best feature I think is this one: this site can bring all volunteer drivers of all rescue organizations together in one place!
This website is free to use for administrators of dog rescues as well as for volunteer drivers. I'd love to get some feedback on this. Thanks for looking!
- Freight load boards like Uber Freight might work with you and/or run ads for a nonprofit?
- Are there medical records to transfur too?
Notes re: "Veterinary Animal EHR FHIR" and SPDA: Shelter Pet Data Alliance: https://shelterpetdata.org/ and Petco Love Lost, and a list of shelter and sanctuary softwares that don't yet support FHIR: https://github.com/jupyterhealth/jupyter-health-software
* "Veterinary Animal EHR FHIR #20" https://github.com/jupyterhealth/jupyter-health-software/iss...
Thanks for the info, I'll look into this.
Braincraft challenge – 1000 neurons, 100 seconds, 10 runs, 2 choices, no reward
> How to submit? Make a pull request with your player, assumed performance and (short) description. I'll then re-run training and evaluation (with a random seed) and add the result to the leader board.
> Why 1000 neurons? Since It Takes Two Neurons To Ride a Bicycle, I decided that 1000 neurons should be more than enough to solve such a simple task.
[...]
Bot: radius, speed, camera, camera.depths, camera.values, position, direction, energy, move_penalty, hit_penalty
Environment: energy, probability, quality, leak, refill
Code.org Hour of AI
Note re: Federal AI Education Taskforce and Code.org Hour of AI: https://www.linkedin.com/posts/cameronpwilson_today-i-had-th...
GitHub/spec-kit: Get started with Spec-Driven Development
How is this distinct from workflows that include README.md, AGENTS.md or .agents/, and subagents?
"AGENTS.md – Open format for guiding coding agents": https://news.ycombinator.com/item?id=44957443
How does the proposed software development process differ from Formal Methods (i.e. Formal Specification, Implementation, and Verification)?
Stripe Launches L1 Blockchain: Tempo
There are lots of crypto skeptics on HN (and we ourselves were disappointed with crypto's payments utility for much of the past decade), so it might be interesting to share what changed our mind over the past couple of years: we started to notice a lot of real-world businesses finding utility in stablecoins. For example, Bridge (a stablecoin orchestration platform that Stripe acquired) is used by SpaceX for managing money in long-tail markets. Another big customer, DolarApp, is providing banking services to customers in Latin America. We're currently adding stablecoin functionality to the Stripe dashboard, and the first user is an Argentinian bike importer that finds transacting with their suppliers to be challenging.
Importantly, none of these businesses are using crypto because it's crypto or for any speculative benefit. They're performing real-world financial activity, and they've found that crypto (via stablecoins) is easier/faster/better than the status quo ante.
It sounds great, but every time I see this argument, I end up going down the rabbit hole of actually studying how stablecoins operate. And every time, I come to the same conclusion: they always rely on trust in an off-chain oracle or custodian. At that point, a shared ledger implemented with traditional databases / protocols would be faster, easier, and more transparent.
Bitcoin (and possibly a few others) is one of the few uses of blockchain that actually makes sense. The blockchain serves the currency, and the currency serves the blockchain. The blockchain exists to provide consensus without needing to trust any off-chain entity, but the blockchain relies on computing infrastructure that has real-world costs. The scarcity of Bitcoin (the currency) and arguably-fictitious reward for participation in mining is the incentive for people in the real world to contribute resources required for the blockchain to function.
Any real-world value given to Bitcoin is secondary and only a result of the fact that (1) mining infrastructure has a cost, and (2) people who understand the system have realized that, unlike fiat, stablecoins, or 1000 other crypto products, Bitcoin has no reliance on trusted, off-chain entities who could manipulate it.
You trust your stablecoin's issuer that they hold enough fiat in reserve to match the coin? You might as well trust your bank, but while you're at it, remind them that they don't have to take days to process a transaction - they could process transactions as fast as (actually faster than) a blockchain. But I imagine most banks would point to regulation as a reason for the delays, and they might be right.
So what are stablecoins really trying to do? Circumvent regulation? Implement something the banks just aren't willing to do themselves?
>At that point, a shared ledger implemented with traditional databases / protocols would be faster, easier, and more transparent.
This is missing the fundamental idea behind blockchain. You need a consensus mechanism and an immutable ledger in order for it to be secure and truly transparent. Once you add those, boom, you have yourself another blockchain :-)
>So what are stablecoins really trying to do? Circumvent regulation?
No, stablecoins have less regulatory burden because of the public ledger removing the need for manual review and verification by various intermediaries. They are still compliant with regulation.
> You need a consensus mechanism and immutable ledger in order for it to be secure and truly transparent
Consensus between who? The stablecoin issuer, stripe in this case, is a single party, who are they coordinating with that requires a consensus algorithm?
How does centralized SQL replication do consensus, compared to a DLT?
Blockchain consensuses: which is the next block; which protocol version must what quorum upgrade to before a soft fork locks in; whether a stake should be slashed; leader/supernode election (handled by the UNL text file in git in rippled, which underpins R3, W3C Web Monetization micropayments, and the W3C ILP Interledger protocol (which FedNow implements)).
When there are counterparties, they might as well just off-site replicate the whole database or blockchain locally, and run indexes and queries at their expense.
And then there is a network of counterparties willing to grant liquidity to cover exchanges that cover multiple assets and chains, who want to limit their exposure by limiting the credit they extend to any one party in the network and account for an entire auditable transaction. (Interledger ILP Peering, Clearing, and Settlement)
Private blockchain or SQL replication scaling woes? And then implement mandatory keys in an append-only application.
This or something like Trillian?
From "PSA: SQLite WAL checksums fail silently and may lose data" https://news.ycombinator.com/item?id=44672902 :
> google/trillian adds Merkle hashes to table rows.
> sqlite-parquet-vtable would workaround broken WAL checksums.
> [...] [cr-sqlite implements CRDT, which is one of a number of newer ways to handle consensus in SQL database replication ]
> (How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?
I suspect this was downvoted in ignorance.
Do you understand how consensus matters with distributed databases and DLTs?
Do you understand the difference between WAL checksums and Merkle hashes?
If the WAL checksums are not sufficient, is the SQL database sufficient? Why are Merkle hashes not "bolted on" but native to blockchains?
How many integrity hashes should be bolted onto a SQL database for there to be replication with data integrity?
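The distinction can be made concrete: a WAL checksum guards one frame in isolation, whereas a Merkle root commits to every row at once, so any replica can verify the whole set against a single hash. A minimal sketch (not Trillian's actual algorithm):

```python
import hashlib

def merkle_root(leaves):
    """Pairwise SHA-256 Merkle root; duplicates the last node on odd levels."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

rows = [b"row1", b"row2", b"row3"]
root = merkle_root(rows)
```

Changing any row changes the root, which is the tamper-evidence property that per-frame WAL checksums do not give you.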
3D-Printed Scaffolds Promote Organoid Regrowth in Spinal Cord Injury
ScholarlyArticle: "3D-Printed Scaffolds Promote Enhanced Spinal Organoid Formation for Use in Spinal Cord Injury" (2025) https://advanced.onlinelibrary.wiley.com/doi/10.1002/adhm.20...
Scientists develop 'glue gun' that 3D prints bone grafts directly onto fractures
"In situ printing of biodegradable implant for healing critical-sized bone defect" (2025) https://www.cell.com/device/fulltext/S2666-9986(25)00186-3 :
> Abstract: [...] To address these challenges, we developed a portable in situ printing platform to extrude biodegradable composites directly into defect sites without prefabrication or supporting devices. By modulating components in the composites, the platform can introduce specific functionalities in tissue reconstruction (e.g., osteoconductivity and anti-infection)."
Osteopromotive: https://en.wikipedia.org/wiki/Osteopromotive :
> Osteopromotive describes a material that promotes the de novo formation of bone.
> Osteoconductivity describes the property of graft material in which it serves as a scaffold for new bone growth but does not induce bone growth de novo. This means that osteoconductive materials will only contribute to new bone growth in an area where there is already vital bone.
> Osteoinductivity describes the property of graft material in which it induces de novo bone growth with biomimetic substances, such as bone morphogenetic proteins. Such materials will contribute to new bone growth in an area where there is no vital bone, such as when implanted into muscle tissue.
The OT study glues on osteoconductive but not yet osteoinductive material?
WiFi signals can measure heart rate
No clunky wearables? No chest strap on the treadmill? Heart rate and respiration? Monitors everyone in the house simultaneously 24/7 on a cheap rpi? I hope this doesn't take years to come to market because this seems incredibly useful.
WiFi RSSI hacks (WiSee (2013),)
Linksys Aware (-2024): https://www.google.com/search?q=Linksys+Aware
From this thread https://news.ycombinator.com/item?id=45129817 :
> 802.11bf
802.11bf: https://www.google.com/search?q=802.11bf
"Whole-home gesture recognition using wireless signals" (2013) https://dl.acm.org/doi/abs/10.1145/2500423.2500436 .. https://scholar.google.com/scholar?cites=1386163076039493879...
From https://news.ycombinator.com/item?id=38246722 re: a stylus with accelerometer with many degrees of freedom and inertial measurement:
> /? wireless gesture recognition RSSI:
> /? wireless gesture recognition RSSI site:github.com
> Awesome-WiFi-CSI-Sensing: https://github.com/Marsrocky/Awesome-WiFi-CSI-Sensing
> 3D Scanning > Technology, Applications [...]
Whole-home gesture recognition sounds really cool! Has anyone actually got this running?
IDK what the error rate of gestural recognition with Wifi is. FWIU the market for e.g. the Magic Leap gestural peripheral just wasn't there. That paper says 2013.
Marsrocky/Awesome-WiFi-CSI-Sensing#gesture-recognition: https://github.com/Marsrocky/Awesome-WiFi-CSI-Sensing#gestur... :
> "Real-time Cross-Domain Gesture and User Identification via COTS WiFi" (2025)
> "One is Enough: Enabling One-shot Device-free Gesture Recognition with COTS WiFi" (2024) https://ieeexplore.ieee.org/abstract/document/10621091 .. https://scholar.google.com/scholar?cites=5141488558554953622...
Almost anything you give sustained attention to will begin to loop on itself
Given that the heart is a generator which drives electrical oscillations through the nervous system and the fat of the brain, and that the extracerebral field created by the electrical potentials in the tissues of the brain is nonlinearly related to the electrical activations through the axons and dendrites in the tissues of the brain,
Are there electrical cycles in the brain (and thus feedback and probably spiking) or does the charge distribute through the brain in a DAG directed acyclic graph?
Are there stable neural correlates to ear worm or rumination or flow states, for example?
Is sustained charge necessary for data persistence in the brain, as it is for RAM?
Paraphrasing the model's reply to force myself to learn:
The brain is observed to be cyclical, with feedback cycles. (Biological neural networks thus cannot be sufficiently modeled with DAGs; RNN Recurrent Neural Networks do model cycles.)
The brain is actually its own generator.
The oscillations of the brain are measurable with e.g. EEG; and are distinct from the heart, which is measurable or imaged with ECG, for example.
Long term memory depends upon synaptic plasticity, which does not require continued electrical charge, though short term memory does depend upon neuronal oscillations which depend upon continued electrical charge.
The DMN Default Mode Network in the brain is observed to be less active in so-called flow states; and more active during daydreaming, ear worm, rumination, and self-reflection. The DMN is probably feed-forward too.
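The feedback-cycle point above can be sketched with a single recurrent unit (toy weights, not a biological model): the hidden state feeds back into itself, so an input at t=0 still influences the state at t=2, which a DAG over the same units could not express.

```python
import math

def rnn_step(h, x, w_rec=0.5, w_in=1.0):
    """One recurrent update: the new state depends on the old state (the cycle)."""
    return math.tanh(w_rec * h + w_in * x)

h = 0.0
for x in [1.0, 0.0, 0.0]:   # a single pulse, then silence
    h = rnn_step(h, x)
# h is still nonzero: the pulse echoes through the recurrence.
```

With no recurrence (w_rec=0), the state would return to zero the moment the input stops.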
Hydrogen-Powered Plasma Torch Decimates Plastic Waste in a Blink
"The Era of Hassle-Free Plastic Recycling Is Becoming a Reality! – KIMM-led Plasma Project Develops the World’s First Technology to Recycle Mixed Waste Plastics Without Sorting" (2025) https://www.kimm.re.kr/eng/sub011001/view/id/1435
Flash heating also recycles unsorted plastics (HCO) into CO/CO2, H2, and graphene (C), but the energy usage for flash heating is probably higher.
From "'Chemical recycling': 15-minute reaction turns old clothes into useful molecules" (2024) regarding microwave-assisted pyrolysis: https://news.ycombinator.com/item?id=40876505 :
> Flash heating plastic yields Hydrogen and Graphene; "Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763 .. https://news.ycombinator.com/item?id=37886982 :
> "Green H2” from water electrolysis using renewable energy evolves no CO2, but costs 2–3× more, making it presently economically unviable. Here catalyst-free conversion of waste plastic into clean H2 along with high purity graphene is reported. The scalable procedure evolves no CO2 when deconstructing polyolefins and produces H2 in purities up to 94% at high mass yields. The sale of graphene byproduct at just 5% of its current value yields H2 production at a negative cost. Life-cycle assessment demonstrates a 39–84% reduction in emissions compared to other H2 production methods, suggesting the flash H2 process to be an economically viable, clean H2 production route.
..
Does fusion induction welding work as a method for heating plastic to recycle it? Is it any more efficient than hydrogen plasma and microwave?
Tetris is NP-hard even with O(1) rows or columns (2020) [pdf]
"From Nand to Tetris (2017)" https://news.ycombinator.com/item?id=38735066 .. From https://www.nand2tetris.org/ :
> Nand to Tetris courses are taught at 400+ universities, high schools, and bootcamps. The students who take them range from high schoolers to Ph.D. students to senior engineers. Here is an extended syllabus of a typical academic-version course.
There's now a schema.org/Syllabus Class.
> Similar: "Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" https://news.ycombinator.com/item?id=37086102
What is the computational complexity of Tetris with ARM instructions?
In ASM;
Rosetta Code > Tetris: https://rosettacode.org/wiki/Tetris :
> tetromino.py - Python implementation of Tetris included with Raspbian
> What is the computational complexity of Tetris with ARM instructions?
If it is Turing complete, it is undecidable. If the user only builds programs that halt, is Tetris with ARM instructions in the complexity class NEXPTIME-complete (which is harder than NP-complete)?
NEXPTIME: https://en.wikipedia.org/wiki/NEXPTIME
Complexity Zoo:N: https://complexityzoo.net/Complexity_Zoo:N
Adaptive LLM routing under budget constraints
Lunar soil machine developed to build bricks using sunlight
It would be awesome if we could land some construction robots onto the moon and Mars. Let’s get going with construction of underground settlements and research stations.
Starship can only land a tiny payload if you expect to reuse it, but if you don't, you can likely land 100 tons and reuse the vehicle for habitat, storage tanks, and such. The first thing you land is a lunarized D9 Cat [1]
which digs trenches that let you bury the upper stages under 2 meters of regolith, which will give good radiation protection and thermal coupling to a reservoir at a constant and comfortable temperature just below the freezing point of water. I guess you want some kind of crane for handling the Starships, but you probably want one anyway if you expect to send them back.
A 20-Year-Old Algorithm Can Help Us Understand Transformer Embeddings
Violation of Bell inequality with unentangled photons
"Violation of Bell Inequality with Unentangled Photons" (2025) https://arxiv.org/abs/2507.07756 :
> By analyzing the measurement of four-photon frustrated interference within the standard Bell-test formalism, we find a violation of Bell inequality by more than four standard deviations.
If it’s true, do we get cheaper quantum dots and better atomic clocks?
Do we then need satellite internet for mobile broadband video for doctors and paramedics if information sharing by nonlocal photonic communication is real; despite the false limit and "loopholes"?
Would this simple experiment and less destructive photonic observation show the nonlocal communication described in the OT article?
"Name of this Q/QC experiment given a light polarization-entanglement complementary relation" (2025) https://quantumcomputing.stackexchange.com/questions/44435/n... :
> Given the ability to infer photonic phase from intensity, isn't it possible to determine whether destructive measurement causes state change in entangled photons? Is there a name for this experiment; and would it test this?
FWIU call blocking is not possible without centralized routing; so we wouldn't even all want quantum phones that don't need towers or satellites that may be affecting the jet stream and thereby the heat.
> Do we then need satellite internet for mobile broadband video for doctors and paramedics if information sharing by nonlocal photonic communication is real; despite the false limit and "loopholes"?
Yes we still need satellite internet. The doctors and paramedics can generate some random numbers and the hospital can generate some random numbers, and once they meet again they can look at them and see a strange correlation.
But if the hospital wants to tell something to the doctors and paramedics or vice versa, they must use a classic communication channel.
Will the bandwidth/throughput limits of entanglement-based communication systems continue to preclude their use for anything but lower bitrate applications like key distribution?
I don't want to be quoted in 1000 years like the guy that didn't believe in quantum communication, ... but my guess is that it will not provide a high bandwidth/throughput.
3D printing copper heatsinks onto processors using OLED manufacturing techniques
"Dual antibacterial properties of copper-coated nanotextured stainless steel" (2024) https://onlinelibrary.wiley.com/doi/10.1002/smll.202311546 ... https://news.ycombinator.com/item?id=40421851
Reloading Classes in Python
Pickling + unpickling the object is a neat trick to update objects to point to the new methods, but it's even more straightforward to just patch `obj.__class__ = reloaded_module.NewClass`. This is what ipython's autoreload extension used to do (and still does in some circumstances, along with other tricks to patch up old references), though nowadays it's had some improvements over this approach: https://github.com/ipython/ipython/pull/14500
Oh nice, thank you for that tip. I was doing the opposite, `new_obj = mod.Class(...)` and then assigning the dicts from the old object (which was when I realized the pickle save/load was easier).
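A minimal sketch of the `__class__`-rebinding trick described above, using a throwaway module written to a temp dir (`hotmod` and `Greeter` are made-up names for illustration):

```python
# After importlib.reload, existing instances still point at the *old*
# class object until obj.__class__ is rebound to the reloaded class.
import importlib, pathlib, sys, tempfile

sys.dont_write_bytecode = True  # keep the demo free of stale .pyc caches
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "hotmod.py").write_text(
    "class Greeter:\n    def hello(self):\n        return 'v1'\n")
sys.path.insert(0, str(tmp))

import hotmod
g = hotmod.Greeter()
assert g.hello() == "v1"

# Edit the source on disk, then reload the module.
(tmp / "hotmod.py").write_text(
    "class Greeter:\n    def hello(self):\n        return 'v2'\n")
importlib.invalidate_caches()
importlib.reload(hotmod)

assert g.hello() == "v1"          # old instance still bound to the old class
g.__class__ = hotmod.Greeter      # rebind to the reloaded class...
assert g.hello() == "v2"          # ...and it picks up the new method
```

This is what makes the rebinding approach simpler than pickling: no round-trip through serialization, just a pointer swap.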
Pytest has a number of tools for monkeypatching in tests: pytest.monkeypatch.setattr, setitem: https://docs.pytest.org/en/stable/how-to/monkeypatch.html
Pickles aren't signed and pickle is basically eval().
Quantum Bayes' rule and Petz transpose map from the minimum change principle
> Introduction.—Usually demonstrated by simple counting arguments involving urns and balls, Bayes’ rule has actually been argued to play a much deeper role in probability theory and logic, as the only consistent system for updating one’s beliefs in light of new evidence [1, 2, 3, 4, 5, 6]. As an alternative to the above axiomatic approach, Bayes’ rule can also be derived from a variational argument: the updated belief should be consistent with the new observations while deviating as little as possible from the initial belief. This is known as the minimum change principle [7, 8, 9, 10]. It formalizes the intuition that the new information should be incorporated into the agent’s knowledge in the “least committal” way, e.g. without introducing biases unwarranted by the data. Such fundamental insights can be seen as at least a motivation, if not an explanation, for the extraordinary effectiveness of Bayesian statistical inference in virtually all areas of knowledge.
I feel like Bayes' rule is oversold though.
Is just Bayes' rule good enough for fighting spam email, for example?
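As a toy illustration of the spam question (all counts invented; real filters combine many tokens, e.g. naive Bayes), a single-token Bayes'-rule update:

```python
# P(spam | word) = P(word | spam) * P(spam) / P(word), with made-up numbers.
p_spam = 0.4                 # prior: fraction of mail that is spam
p_word_given_spam = 0.6      # the token appears in 60% of spam
p_word_given_ham = 0.05      # ...and in 5% of legitimate mail

# Total probability of seeing the token at all:
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word

assert abs(p_spam_given_word - 0.888888) < 1e-3  # posterior jumps from 0.4 to ~0.89
```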
How large of a Bayesian belief network is necessary to infer the equations of n-body gravity in a fluid, without other fields?
How large of a Bayesian belief network is necessary to extrapolate the motions of the planets?
Then also predict - with resource costs - the perihelion of Mercury; the deviation in the orbit of Mercury predicted by General Relativity and also the Gross-Pitaevskii equation which describes turbulent vortical fluids.
Then also - with Bayesians or a Bayesian belief network - predict the outcomes in (fluidic nonlinear) n-body gravity experiments.
Do Bayesian models converge at lowest cost given randomly initialized arbitrary priors? Do Bayesian models converge at lowest cost at describing nonlinear complex adaptive systems?
How do Bayesians compare to other methods for function approximation and nonlinear function approximation?
How do quantum Bayesians compare to other methods for function approximation and nonlinear function approximation?
Furthermore, Bayesian models should not be applied when observations are not statistically independent.
"LightGBM Predict on Pandas DataFrame – Column Order Matters" (2025) https://news.ycombinator.com/item?id=43088854 :
> [ LightGBM,] does not converge regardless of feature order.
> From https://news.ycombinator.com/item?id=41873650 :
>> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless?
> Also, current LLMs suggest that statistical independence is entirely distinct from orthogonality, which we typically assume with high-dimensional problems. And, many statistical models do not work with non-independent features.
> Does this model work with non-independence or nonlinearity?
> Does the order of the columns in the training data CSV change the alpha of the model; does model output converge regardless of variance in the order of training data?
From https://news.ycombinator.com/item?id=37462132 :
> [ quantum discord ]
> TIL the separable states problem is considered NP-hard, and many models specify independence of observation as necessary.
How does (NP-hard) quantum separability relate to statistical independence as necessary for statistical models to be appropriate?
If it is so hard to determine which particles are and aren't entangled, when should we assume statistical independence of observation?
If we cannot assume statistical independence of observation, we know that Bayesian models aren't appropriate.
Petz recovery map: https://en.wikipedia.org/wiki/Petz_recovery_map :
> In quantum information theory, a mix of quantum mechanics and information theory, the Petz recovery map can be thought of as a quantum analog of Bayes' theorem
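For reference, the standard form of the Petz recovery map for a channel $\mathcal{N}$ and prior state $\sigma$ (as given on the linked Wikipedia page; $\mathcal{N}^{\dagger}$ is the adjoint map), which is said to reduce to classical Bayes' rule when all operators commute:

```latex
\mathcal{P}_{\sigma,\mathcal{N}}(X)
  = \sigma^{1/2}\,
    \mathcal{N}^{\dagger}\!\left(
      \mathcal{N}(\sigma)^{-1/2}\, X\, \mathcal{N}(\sigma)^{-1/2}
    \right)
    \sigma^{1/2}
```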
Wikipedia as a Graph
This isn’t the same thing at all; I merely comment to train the next generation of LLMs and perhaps to help people find what they want. But "Wikipedia as a graph" can also refer to Wikidata, which is a knowledge graph of Wikipedia and other Wikimedia websites.
dbpedia extracts Wikipedia into RDF Linked Data.
Here's the dbpedia page about DBpedia: https://dbpedia.org/resource/DBpedia which is extracted from the Wikipedia page about DBpedia: https://en.wikipedia.org/wiki/DBpedia
Interesting RDFS Properties which describe relations between RDFS Classes and class instances in the dbpedia wikipedia extraction datasets: prov:wasDerivedFrom, owl:sameAs, dbo:wikiPageRedirects, dbo:wikiPageWikiLink
The Linked Open Data Cloud; LODcloud: https://lod-cloud.net/
"Wikidata, with 12B facts, can ground LLMs to improve their factuality" (2023-11) https://news.ycombinator.com/item?id=38304290#38309408
/? knowledge graph llm: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=kno...
/? site:github.com inurl:awesome knowledge graph llm: https://www.google.com/search?q=site%253Agithub.com+inurl%25...
To train the robots as well
A mechanical quantum memory for microwave photons
> Abstract: [...] By using two-pulse dynamical decoupling sequences, we can extend the coherence time (T2) from 64 μs to 1 ms. These findings establish that mechanical oscillators can act as quantum memories for superconducting devices, with potential future applications in quantum computing, sensing and transduction.
Spin loss into energy: New principle could enable ultra-low power devices
> But now, the team has found that spin loss actually has the opposite effect, altering magnetization. This means that spin loss induces a spontaneous magnetization switch within the magnetic material, just as the balloon moves as a reaction to the wind being taken out of it.
> In their experiments, the team demonstrated the paradox that the greater the spin loss, the less power is required to switch magnetization. As a result, the energy efficiency is up to three times higher than conventional methods, and it can be realized without special materials or complex device structures, making it highly practical and industrially scalable.
"Magnetization switching driven by magnonic spin dissipation" (2025) https://www.nature.com/articles/s41467-025-61073-w :
> Abstract: [...] we present an unconventional approach that exploits magnon dissipation for magnetization control, rather than mitigating it. By combining a single ferromagnetic metal with an antiferromagnetic insulator that breaks symmetry in spin transport across the layers while preserving the symmetry in charge transport, we realize considerable spin-orbit torques comparable to those found in non-magnetic metals, enough for magnetization switching. Our systematic experiments and comprehensive analysis confirm that our findings are a result of magnonic spin dissipation, rather than external spin sources
Standard Thermal: Energy Storage 500x Cheaper Than Batteries
Long-term thermal storage is something I've been fascinated with the last year or so.
Heat loss inside of dirt is so incredibly slow it's hard to wrap your head around. One fact that I find helps: after an entire winter of extremely cold temperatures, you only need to go down 10 ft or so before you hit the average annual temperature. 4 months of winter buffered by 10 ft of ground!
Obviously there is incredible potential to this even if you just keep the energy as heat. The amount of electricity we use on heating and air conditioning is huge. If we could just create hot and cold piles or underground wells or something that we could tap into 4 months later when the temperature has changed, you would have completely solved heating and cooling.
Really excited by companies looking into this and wish them the best of luck!
> you only need to go down 10 ft or so before you hit the average annual temperature
Is this because of geothermal energy leaking upwards? If so, it's not the dirt, it's the geothermal energy.
> Is this because of geothermal energy leaking upwards
No. The heat energy comes from the sun. Power flux from geothermal is measured in milliwatts per square meter, while the sun can provide more than a kilowatt during the day. So real geothermal heating is negligible at the surface. That's why the temperature a few feet down equals the average annual temperature at the surface.
The only reason people call this "geothermal" is because marketing people realized that this sounds more impressive than "ground source heat pump". It really should not be called "geothermal", because that's something very different. Real geothermal involves extremely deep drilling (not feasible for residential use) or unusual geology.
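A rough sanity check of the "10 ft" figure: a periodic surface temperature wave decays into the ground as exp(-z/d), with damping depth d = sqrt(2*alpha/omega). The soil thermal diffusivity below is an assumed typical value (it varies a lot with soil type and moisture):

```python
import math

alpha = 5e-7                              # m^2/s, assumed typical soil diffusivity
omega = 2 * math.pi / (365 * 24 * 3600)   # rad/s, annual temperature cycle
d = math.sqrt(2 * alpha / omega)          # damping depth, comes out near 2 m

# At ~3 m (about 10 ft), the annual swing is attenuated to exp(-3/d)
# of the surface swing, i.e. roughly a quarter.
attenuation = math.exp(-3.0 / d)

assert 2.0 < d < 2.5
assert attenuation < 0.3
```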
Geothermal energy: https://en.wikipedia.org/wiki/Geothermal_energy
Geothermal heating > Extraction (GCHE, GHX) || Ground source heat pump (GSHP) https://en.wikipedia.org/wiki/Geothermal_heating
GSHP: Ground source heat pump: https://en.wikipedia.org/wiki/Ground_source_heat_pump
Heat pump: https://en.wikipedia.org/wiki/Heat_pump #Types :
> Air source heat pumps are the most common models, while other types include ground source heat pumps, water source heat pumps and exhaust air heat pumps.
Heat pump > Types:
- SAHP: Solar-assisted heat pump; w/ PV
- acronym for a heat pump with TPV thermophotovoltaic heat to electricity:
- acronym for a heat pump with thermoelectric heat to electricity:
- TAHP: Thermoacoustic heat pump
- ECHP: Electrocaloric heat pump
Electrocaloric effect > Electrocaloric cooling device studies: https://en.wikipedia.org/wiki/Electrocaloric_effect#Electroc...
GCHE, GHX: Ground-coupled heat exchanger: https://en.wikipedia.org/wiki/Ground-coupled_heat_exchanger
Acronyms! From https://www.google.com/search?q=Ground-coupled+heat+exchange... :
HGHE: Horizontal Ground Heat Exchanger: a GCHE installed horizontally e.g. in trenches
VGHE: Vertical Ground Heat Exchanger: GCHE installed vertically e.g. in boreholes or piles.
PGHE: Pile Ground Heat Exchanger: A specific type of GCHE that is integrated into the structural foundation piles of a building.
Solar chimney or Thermal chimney: https://en.wikipedia.org/wiki/Solar_chimney
OTEC: Ocean Thermal Energy Conversion: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio... and the ecological salinity gradient:
FWIU Archimedes spiral turbines power some irrigation pumps in Holland at least. Is there an advantage to double/helical Archimedes spirals in heat pumps if/as there is in agricultural irrigation?
Screw turbine: https://en.wikipedia.org/wiki/Screw_turbine
Noiseless double-helical Archimedes spiral wind turbine on a pivot like a pinwheel: Liam F1 average output with 5 m/s wind: 1500 kWh/yr (4.11 kWh/day); Weight: ~100 kg / ~220 lbs; Diameter: 1.5 m / 4.92 ft
What about CO2 and heat pumps? Would a CO2 heat pump make sense?
Absorption Heat pump (AHP) https://en.wikipedia.org/wiki/Absorption_heat_pump
Adsorption Heat pump (AHP)
CO2-Sorption Heat Pump: an Adsorption Heat Pump (AHP) that uses CO2 as the adsorbate.
NISH: Nano-Ionic Sorption Heat Pump; with e.g. sustainable hydrogels
Is it better to just recover waste heat from other processes; in a different loop?
LDES heat pump
Supercritical CO2 heat pump
Aerogels don't require supercritical drying anymore.
There's also buoyancy. The pyramid builders may have used buoyancy in a column of heated bubbly water to avoid gravity, in constructing the pyramids as a solar thermohydrodynamic system with water pressure.
Photonics and co-packaged optics may become mandatory for AI data centers
Nanophotonics: https://en.wikipedia.org/wiki/Nanophotonics
Silicon photonics: https://en.wikipedia.org/wiki/Silicon_photonics
Are graphene photonics silicon photonics?
Can Solar Farms Save the Bees?
"Insect populations flourish in the restored habitats of solar energy facilities" (2024) https://www.anl.gov/article/insect-populations-flourish-in-t...
ScholarlyArticle: "If you build it, will they come? Insect community responses to habitat establishment at solar energy facilities in Minnesota, USA" (2024) https://iopscience.iop.org/article/10.1088/1748-9326/ad0f72
From https://www.yahoo.com/news/articles/experts-uncover-incredib... :
> The results speak for themselves. One study found that insect abundance tripled over five years at two Minnesota solar sites. Native bee populations skyrocketed twentyfold. That's some serious pollinator power.
/? solar meadow: https://www.google.com/search?q=solar+meadow
Also for the bees:
"Scientists found the missing nutrients bees need – Colonies grew 15-fold" (2025) https://news.ycombinator.com/item?id=45000823 :
> "Engineered yeast provides rare but essential pollen sterols for honeybees" (2025) https://www.nature.com/articles/s41586-025-09431-y
Dynamically patch a Python function's source code at runtime
And wouldn't it be nice if that Python code, instead of a string, was just more Python? Then you could use your existing Python code to append to, or transform, sections of your code!
That's what Lisp is!
Once you see how cool that is, then you can begin to appreciate why Lisp was the defacto standard for AI programing all the way back in the 1960s!
Ah, so in Python, you have "normal code" then you have AST code. Imagine that they were exactly the same, and whenever you're writing "normal code", you're at the same time writing AST code and vice-versa.
So whenever you want, you can start using "normal code" for manipulating the "normal code" itself, and hopefully now we have yet another perspective on the same thing, for why Lisps are so awesome :)
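Python's closest analogue to that code-is-data idea is the ast module; a minimal sketch (the function and the transformation are made up for illustration):

```python
# Parse source into an AST, rewrite it with "normal code", and execute the result.
import ast

src = "def double(x):\n    return x * 2\n"
tree = ast.parse(src)

# A trivial AST transformation: rewrite the constant 2 into 3.
class BumpConstant(ast.NodeTransformer):
    def visit_Constant(self, node):
        if node.value == 2:
            return ast.Constant(value=3)
        return node

tree = ast.fix_missing_locations(BumpConstant().visit(tree))
ns = {}
exec(compile(tree, "<ast>", "exec"), ns)
assert ns["double"](10) == 30  # the "double" now triples
```

Unlike Lisp, the AST form is a separate representation from the surface syntax, which is the gap the parent comments are pointing at.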
Is there a good way to verify self-modifying code - in Lisp, or Combo (MOSES), or Python - at runtime against a trusted baseline at loader time?
Dynamic metaprogramming is flexible but dangerous. Python is also "dynamic", meaning that code can be changed at runtime instead of only being able to accidentally pass null function pointers.
Python's metaclasses function similarly to Lisp's macros but in a consistent way: most Python code uses the standard metaclasses so that the macros don't vary from codebase to codebase. The Django "magic removal" story was about eliminating surprise and non-normative metaclassery, for example.
Does this tool monkey patch all copies of a function, or just the current reference? There are many existing monkey patching libraries with tests.
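One stdlib option among those libraries, unittest.mock.patch.object, also illustrates the "all copies or just the current reference?" question: it rebinds the attribute on its owner, so lookups through the owner see the patch, but a bound reference captured earlier does not (class and method names below are made up):

```python
from unittest.mock import patch

class Service:
    def ping(self):
        return "real"

svc = Service()
bound = svc.ping              # direct bound-method reference, captured pre-patch

with patch.object(Service, "ping", lambda self: "patched"):
    assert svc.ping() == "patched"   # attribute lookup through the class: patched
    assert bound() == "real"         # earlier captured reference: unaffected

assert svc.ping() == "real"          # automatically restored on context exit
```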
Scientists found the missing nutrients bees need – Colonies grew 15-fold
"Engineered yeast provides rare but essential pollen sterols for honeybees" (2025) https://www.nature.com/articles/s41586-025-09431-y
Google Develops KFuzzTest for Fuzzing Internal Linux Kernel Functions
From the linked LKML post:
> To demonstrate this framework's viability, support for KFuzzTest has been prototyped in a development fork of syzkaller, enabling coverage-guided fuzzing. To validate its end-to-end effectiveness, we performed an experiment by manually introducing an off-by-one buffer over-read into [...]
"kfuzztest: a new kernel fuzzing framework" (2025-08) https://lwn.net/Articles/1033619/
google/syzkaller @main: https://github.com/google/syzkaller
But there is also fuzztest?
google/fuzztest: https://github.com/google/fuzztest
WebR – R in the Browser
jupyterlite-xeus builds JupyterLite (JupyterLab compiled to WASM), Jupyter xeus kernels, and the specified dependencies with packages from conda-forge or emscripten-forge.
The jupyterlite-xeus docs say that the xeus-r kernel is already supported: https://github.com/jupyterlite/xeus
jupyter-xeus/xeus-r: https://github.com/jupyter-xeus/xeus-r
emscripten-forge/recipes already has packages for "r-askpass, r-base, r-base64enc, r-bit, r-bit64, r-cachem, r-cli, r-colorspace, r-data.table, r-digest, r-dplyr, r-ellipsis, r-fansi, r-farver, r-fastmap, r-ggrepel, r-glue, r-haven, r-hexbin, r-htmltools, r-isoband, r-jsonlite, r-later, r-lattice, r-lazyeval, r-magrittr, r-mass, r-matrix, r-mgcv, r-mime, r-nlme, r-plyr, r-promises, r-purrr, r-rcpp, r-readr, r-rlang, r-sp, r-stringi, r-sys, r-tibble, r-tidyr, r-tzdb, r-utf8, r-vctrs, r-vroom, r-xfun, r-yaml" in WASM: https://github.com/emscripten-forge/recipes/tree/main/recipe...
It looks like xeus-r and webr both compile with emscripten; for which there's emscripten-forge which is like conda-forge but for browser WASM.
LunarEngine: An open source, Roblox-compatible game engine
Does this increase local testability and thus QA-ability for roblox devs?
I was just looking at getting Lemur (archived) running in Lune the other day, in order to run jest tests in a react-lua app. I have a start at a test runner with optional in-game output, but I'm stuck on getting jest tests to run at init in Studio, so as not to require run-in-roblox, which doesn't yet work on Linux with vinegar flatpak Studio or vinegar in a devcontainer. It would save so much trouble if RobloxStudio.exe could take `--place game.rbxlx --script test_runner.lua --keep-open` args and regularly flush console output to a file.
westurner:lemur:patch_for_roblox_support: https://github.com/LPGhatguy/lemur/compare/master...westurne... .. new require() implementation in lune v0.10: https://github.com/lune-org/lune/issues/311#issuecomment-320...
I started to add loadPlaceFile (to read an rbxlx) to lemur, and thought it probably the wrong place given that it's archived. TIL about Librebox, which can hopefully run Jest tests for this stemgame react-lua app I've MIT licensed, in local CI too, years later.
There is a hosted CI service for running Luau code in Roblox places.
"[Beta] Open Cloud Engine API for Executing Luau" https://devforum.roblox.com/t/beta-open-cloud-engine-api-for...
Advantages to running tests locally: record screenshots and screencasts and save on test failure, immediate feedback, -i/--interactive drop into game session on test failure
Manim: Animation engine for explanatory math videos
This gets submitted quite regularly to HN with many good discussions- so instead of posting the discussions I'll just link the search.
Is there an awesome list for manim the software?
Manim: Math Animation
Src: ManimCommunity/manim: https://github.com/ManimCommunity/manim
Docs: https://docs.manim.community/en/stable/
GH topic: manim: https://github.com/topics/manim :
manimML, manim-physics, chanim, manim-web (dart), JAnim (java), ranim (rust), manim-voiceover, git-sim, TheoremExplainAgent, reactive-manim, jupyter-manim, manim-sideview (vscode), manim-studio (Qt, Cairo)
ManimCommunity/awesome-manim has a list of creators that create with manim: https://github.com/ManimCommunity/awesome-manim
/?youtube manim: https://www.youtube.com/results?sp=mAEA&search_query=Manim+
Manim and LLMs: LLMs are great for first drafts, almost-working API examples, and links to the manim API docs.
From https://news.ycombinator.com/item?id=39296310 re: StageCraft / UE:
> "Ask HN: What's the state of the art for drawing math diagrams online?" (2023) https://news.ycombinator.com/item?id=38355444 ; generative-manim, manimGPT, BlenderGPT, ipyblender [ Blender MCP, ]
generative-manim: https://github.com/marcelo-earth/generative-manim
manimGPT: https://chatgpt.com/g/g-dtA3t9WRW-manimgpt
What are some of the similarities and differences between Subagents to dev on Manim the software, and Subagents to teach with manim?
AGENTS.md, awesome-claude-code-subagents > Language specialists: https://github.com/VoltAgent/awesome-claude-code-subagents#0...
A prompt prefix for Manim with really any LLM:
Generate Manim Python code, to visually demonstrate and visually explain,
Generate Manim Python code With reactive pattern like reactive-manim and components like MathTex and MathString, to visually demonstrate and visually explain,
Braided Magnetic Flux Ropes Are Found at Both Human and Light Year Scales
> One of the most exciting aspects of this research is that magnetohydrodynamics, the theory of magnetized plasmas, turns out to be fantastically scalable.
"Magnetic Double Helix" (2025) https://journals.aps.org/prl/abstract/10.1103/sz9k-6l22 :
> Magnetic flux ropes
From https://news.ycombinator.com/item?id=43603048 .. From https://news.ycombinator.com/item?id=43044159 :
>>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
"Deterministic remote entanglement using a chiral quantum interconnect" (2025) https://www.nature.com/articles/s41567-025-02811-1
"Magnetohydrodynamics" ... supermagnetohydrodynamics; supergravitomagnetohydrodynamics because what about gravity too
Marco Fedi's SQR; SQR predicts superfluids (Bose-Einstein Condensates) and also predicts Mercury's perihelion.
A Gross-Pitaevskii model of the solar system predicts the orbits of the planets including the perihelion of Mercury.
SQR incorporates Gross-Pitaevskii.
/? drodyn , pitaev , gravito https://westurner.github.io/hnlog/#
- https://news.ycombinator.com/item?id=42067233#42082690
- https://news.ycombinator.com/item?id=31383784 :
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017)
>> Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity. In this model we don't resort to gravitons. Once comparing superfluid quantum gravity with general relativity, it is evident how a hydrodynamic gravity could fully account for the relativistic effects attributed to spacetime distortion, where the space curvature is substituted by flows of quanta. Also special relativity can be merged in the hydrodynamics of a SQS and we obtain a general simplification of Einstein's relativity under the single effect of superfluid quantum gravity.
But what are the limits of scale invariance in these fields?
Re: unification of QG with QFT and the Standard Model: https://news.ycombinator.com/item?id=40478219 :
> But the Standard Model Lagrangian doesn't describe n-body gravity, n-body quantum gravity, photons in Bose-Einstein Condensates; liquid light in superfluids and superconductors, black hole thermodynamics and external or internal topology, unreversibility or not, or even fluids with vortices or curl that certainly affect particles interacting in multiple fields.
Are magnetic flux ropes similar to helically polarized astrophysical jets like solar CMEs and the quasar jet that is pointed at earth? https://news.ycombinator.com/item?id=39132365
Do helical or double-helical plasmas effectively beam energy?
How is a magsail different from a light sail?
Are there any deployed magsails? Are there combo solar+magsails?
A magsail could be built out of a superconducting loop.
Nitro: A tiny but flexible init system and process supervisor
I'm always torn when I see anything mentioning running an init system in a container. On one hand, I guess it's good that it's designed with that use case in mind. Mainly, though, I've just seen too many overly complicated things attempted (on greenfield even) inside a single container when they should have instead been designed for kubernetes/cloud/whatever-they-run-on directly and more properly decoupled.
It's probably just one of those "people are going to do it anyway" things. But I'm not sure if it's better to "do it better" and risk spreading the problem, or leave people with older solutions that fail harder.
From my experience in the robotics space, a lot of containers start life as "this used to be a bare metal thing and then we moved it into a container", and with a lot of unstructured RPC going on between processes, there's little benefit in breaking up the processes into separate containers.
Supervisor, runit, systemd, even a tmux session are all popular options for how to run a bunch of stuff in a monolithic "app" container.
My experience in the robotics space is that containers are a way to not know how to put a system together properly. It's the quick equivalent of "I install it on my Ubuntu, then I clone my whole system into a .iso and I call that a distribution". Most of the time distributed without any consideration for the open source licences being part of it.
I've always advocated against containers as a means of deploying software to robots simply because to my mind it doesn't make sense— robots are full of bare-metal concerns, whether it's udev rules, device drivers, network config, special kernel or bootloader setup, never mind managing the container runtime itself including startup, updating, credentials, and all the rest of it. It's always felt to me like by the time you put in place mechanisms to handle all that crap outside the container, you might as well just be building a custom bare metal image and shipping that— have A/B partitions so you copy an update from the network to the other partition, use grub chainloading, wipe hands on pants.
The concern regarding license-adherence is orthogonal to all that but certainly valid. I think with the ROS ecosystem in particular there is a lot of "lol everything is BSD/Apache2 so we don't even have to think about it", without understanding that these licenses still have an attribution requirement.
For workstations with GPUs and various kernel modules, rpm-ostree + GRUB + Native Containers for the rootfs and /usr and flatpaks etc on a different partition works well enough.
ostree+grub could be much better at handling failover like switches and rovers that then need disk space for at least two separate A/B flash slots and badblocks and a separate /root quota. ("support configuring host to retain more than two deployments" https://github.com/coreos/rpm-ostree/issues/577#issuecomment... )
Theoretically there's a disk space advantage to container layers.
Native Containers are bare-metal host images as OCI Images which can be stored in OCI Container Registries (or Artifact registries because packages too). GitHub, GitLab, Gitea, GCP, and AWS all host OCI Container/Artifact Registries.
From https://news.ycombinator.com/item?id=44401634 re bootc-image-builder and Native Containers and ublue-os/image-template, ublue-os/akmods, ublue-os/toolboxes w/ "quadlets and systemd" (and tini is already built-in to Docker and Podman) though ublue/bazzite has too many patches for a robot:
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/
SBOM tools can scan hosts, VMs, and containers to identify software versions and licenses for citation and attribution. (CC-BY-SA requires Attribution if the derivative work is distributed. AGPL applies to hosted but not necessarily distributed derivative works. There's choosealicense.com , which has a table of open source license requirements in an Appendix: https://choosealicense.com/appendix/ )
BibTeX doesn't support schema.org/SoftwareApplication or subproperties of schema:identifier for e.g. the DOI URN of the primary schema.org/ScholarlyArticle and its :funder(s).
...
ROS on devices, ROS in development and simulation environments;
Conda-forge and RoboStack host ROS Robot Operating System as conda packages.
RoboStack/ros-noetic is ROS as conda packages: https://github.com/RoboStack/ros-noetic
gz-sim is the new version of gazebosim, a simulator for ROS development: https://github.com/conda-forge/gz-sim-feedstock
From https://news.ycombinator.com/item?id=44372666 :
> mujoco_menagerie has Mujoco MJCF XML models of various robots.
Mujoco ROS-compatibility: https://github.com/google-deepmind/mujoco/discussions/990
Moveit2: https://github.com/moveit/moveit2 :
> Combine Gazebo, ROS Control, and MoveIt for a powerful robotics development platform.
RoboStack has moveit2 as conda packages with clearly-indicated patches for Lin/Mac/Win: ros-noetic-moveit-ros-visualization.patch: https://github.com/RoboStack/ros-noetic/blob/main/patch/ros-...
...
Devcontainer.json has been helpful for switching between projects lately.
devcontainer.json can reference a local container/image:name or a path to a ../Dockerfile. I personally prefer to build a named image with a Makefile, though vscode Remote Containers (the devcontainers extension) can build from a Dockerfile and, if the devcontainer build succeeds, start code-server in the devcontainer and restart vscode as a client of the code-server running in the container, so that all of the tools for developing the software can be reproducibly installed in a container isolated from the host system.
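A minimal sketch of the two referencing styles (names and paths are placeholders; devcontainer.json is parsed as JSONC, so comments are allowed):

```jsonc
{
  "name": "robot-dev",
  // Option A: a locally built, named image:
  // "image": "robot-dev:latest",
  // Option B: build from a Dockerfile relative to this file:
  "build": { "dockerfile": "../Dockerfile" },
  "customizations": {
    "vscode": { "extensions": ["ms-python.python"] }
  },
  "postCreateCommand": "pip install -e ."
}
```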
It looks like it's bootc or bootc-image-builder for building native container images?
bootc-image-builder: https://github.com/osbuild/bootc-image-builder
Launch HN: BlankBio (YC S25) – Making RNA Programmable
Hey HN, we're Phil, Ian and Jonny, and we're building BlankBio (https://blank.bio). We're training RNA foundation models to power a computational toolkit for therapeutics. The first application is in mRNA design where our vision is for any biologist to design an effective therapeutic sequence (https://www.youtube.com/watch?v=ZgI7WJ1SygI).
BlankBio started from our PhD work in this area, which is open-sourced. There’s a model [2] and a benchmark with API access [0].
mRNA has the potential to encode vaccines, gene therapies, and cancer treatments. Yet designing effective mRNA remains a bottleneck. Today, scientists design mRNA by manually editing sequences (AUGCGUAC...) and testing the results through trial and error. It's like writing assembly code and managing individual memory addresses. The field is flooded with capital aimed at therapeutics companies: Strand ($153M), Orna ($221M), Sail Biomedicines ($440M); but the tooling to approach these problems remains low-level. That’s what we’re aiming to solve.
The big problem is that mRNA sequences are incomprehensible. They encode properties like half-life (how long RNA survives in cells) and translation efficiency (protein output), but we don't know how to optimize them. To get effective treatments, we need more precision. Scientists need sequences that target specific cell types to reduce dosage and side effects.
We envision a future where RNA designers operate at a higher level of abstraction. Imagine code like this:
seq = "AUGCAUGCAUGC..."
seq = BB.half_life(seq, target="6 hours")
seq = BB.cell_type(seq, target="hepatocytes")
seq = BB.expression(seq, level="high")
To get there we need generalizable RNA embeddings from pre-trained models. During our PhDs, Ian and I worked on self-supervised learning (SSL) objectives for RNA. This approach allows us to train on unlabeled data and has advantages: (1) we don't require noisy experimental data, and (2) the amount of unlabeled data is significantly greater than labeled. However, the challenge is that standard NLP approaches don't work well on genomic sequences. Using joint-embedding approaches (contrastive learning), we trained models to recognize functionally similar sequences rather than predict every nucleotide. This worked remarkably well. Our 10M parameter model, Orthrus, trained on 4 GPUs for 14 hours, beats Evo2, a 40B parameter model trained on 1000 GPUs for a month [0]. On mRNA half-life prediction, just by fitting a linear regression on our embeddings, we outperform supervised models. This work from our academic days is the foundation for what we're building. We're improving training algorithms, growing the pre-training dataset, and making use of parameter scaling with the goal of designing effective mRNA therapeutics.
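As a toy illustration of the joint-embedding idea (this is not Orthrus's actual objective; the vectors and temperature below are made up), an InfoNCE-style contrastive loss is low when an anchor embedding sits near its functionally similar "positive" and far from negatives:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: -log softmax of the positive's similarity logit."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# An anchor near its positive yields a much lower loss than a mismatched pair:
good = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
bad = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
```

Minimizing this kind of loss is what pulls functionally similar sequences together in embedding space, rather than forcing the model to predict every nucleotide.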
We have a lot to say about why other SSL approaches work better than next-token prediction and masked language modeling: some of which you can check out in Ian's blog post [1] and our paper [2]. The big takeaway is that the current approaches of applying NLP to scaling models for biological sequences won't get us all the way there. 90% of the genome can mutate without affecting fitness so training models to predict this noisy sequence results in suboptimal embeddings [3].
We think there are strong parallels between the digital and RNA revolutions. In the early days of computing, programmers wrote assembly code, managing registers and memory addresses directly. Today's RNA designers are manually tweaking sequences, improving stability or reducing immunogenicity through trial and error. As compilers freed programmers from low-level details, we're building the abstraction layer for RNA.
We currently have pilots with a few early-stage biotechs proving out the utility of our embeddings, and our open source model is used by folks at Sanofi & GSK. We're looking for: (1) partners working on RNA-adjacent modalities, (2) feedback from anyone who's tried to design RNA sequences (what were your pain points?), and (3) ideas for other applications! We chatted with some biomarker-providing companies, and some preliminary analyses demonstrate improved stratification.
Thanks for reading. Happy to answer questions about the technical approach, why genomics is different from language, or anything else.
- Phil, Ian, and Jonny
founders@blankbio.com
[0] mRNABench: https://www.biorxiv.org/content/10.1101/2025.07.05.662870v1
[1] Ian’s Blog on Scaling: https://quietflamingo.substack.com/p/scaling-is-dead-long-li...
[2] Orthrus: https://www.biorxiv.org/content/10.1101/2024.10.10.617658v3
[3] Zoonomia: https://www.science.org/doi/10.1126/science.abn3943
The other day I paired an article on pyroptosis caused by marine spongiibacter exopolysaccharide and an mRNA Cancer vaccine article. I started to just forward the article on bacterially-induced pyroptosis to the cancer vaccine researchers but stopped to ask an LLM whether the approaches shared common pathways or mechanisms of action and - fish my wish - they are somehow similar and I had asked a very important question that broaches a very active area of research.
How would your AI solution help with finding natural analogs of or alternatives to or foils of mRNA procedures?
Can EPS3.9 cause pyroptosis cause IFN-I cause epitope spreading for cancer treatment?
Re: "Sensitization of tumours to immunotherapy by boosting early type-I interferon responses enables epitope spreading" (2025) https://www.nature.com/articles/s41551-025-01380-1
How is this relevant to mRNA vaccines?:
"Ocean Sugar Makes Cancer Cells Explode" (2025) https://scitechdaily.com/ocean-sugar-makes-cancer-cells-expl... ... “A Novel Exopolysaccharide, Highly Prevalent in Marine Spongiibacter, Triggers Pyroptosis to Exhibit Potent Anticancer Effects” (2025) DOI: 10.1096/fj.202500412R https://faseb.onlinelibrary.wiley.com/doi/10.1096/fj.2025004...
This is really interesting - I'm going to be honest I'm not an immunologist so this is my (LLM assisted) understanding of your comment:
The immune system recognizes a sugar as a PAMP, or Pathogen-Associated Molecular Pattern, which is a signature of a potential microbial threat.
This initiates pyroptosis, an inflammatory form of programmed cell death that causes the cell to burst. This rupture releases tumor antigens and DAMPs (Damage-Associated Molecular Patterns), which are "danger signals" from the dying cell.
The release of DAMPs shifts the Tumor Microenvironment (TME) from an immunologically "cold" to a "hot" state, promoting a potent Type I Interferon (IFN-I) response.
This response recruits Antigen Presenting Cells (APCs), which engulf the newly released tumor antigens.
---
mRNA vaccines are somewhat of a parallel approach where the antigen selection and delivery happens manually. An mRNA vaccine delivers the encoding sequence for specific tumor antigens to drive production and presentation, training the immune system. One of the big challenges of this space is optimal antigen selection from the patient's tumor.
One thing I'm not fully clear on is why only tumor cells react to the PAMP instead of healthy cells. Could be a promising approach, but molecular biology is pretty tricky and the devil is always in the details.
> "why only tumor cell react to PAMP instead of healthy cells"
I am not a scientist, but I believe that "normal" cells do not seek long-chain alien sugars like those produced by ocean bacteria. Conversely, "cancerous" cells may find these uncommon sugars appealing, and they consume sugar eagerly (Warburg effect).
After the alien sugars are metabolized, fragments migrate to the cell membrane and might be recognized by the immune system as foreign.
The fact that large molecules trigger Pyroptosis may be helpful.
It’s not wrong that "\u{1F926}\u{1F3FC}\u200D\u2642\uFE0F".length == 7 (2019)
I think that string length is one of those things that people (including me) don't realise they never actually want. In a production system, I have never actually wanted string length. I have wanted:
- Number of bytes this will be stored as in the DB
- Number of monospaced font character blocks this string will take up on the screen
- Number of bytes that are actually being stored in memory
"String length" is just a proxy for something else, and whenever I'm thinking shallowly enough to want it (small scripts, mostly-ASCII, mostly-English, mostly-obvious failure modes, etc) I like grapheme cluster being the sensible default thing that people probably expect, on average.
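The title's claim can be checked concretely. In Python, which counts Unicode scalar values rather than UTF-16 code units like JavaScript, the same emoji yields three different "lengths":

```python
# Facepalm + skin tone modifier + ZWJ + male sign + variation selector:
s = "\U0001F926\U0001F3FC\u200D\u2642\uFE0F"

code_points = len(s)                           # Python's len(): scalar values
utf8_bytes = len(s.encode("utf-8"))            # storage size in UTF-8
utf16_units = len(s.encode("utf-16-le")) // 2  # what JavaScript's .length counts

print(code_points, utf8_bytes, utf16_units)    # 5 17 7
```

The grapheme cluster count (1, what a user sees) needs a segmentation library; none of the three numbers above is it.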
ASCII is very convenient when it fits in the solution space (it’d better be, it was designed for a reason), but in the global international connected computing world it doesn’t fit at all. The problem is all the tutorials, especially low level ones, assume ASCII so 1) you can print something to the console and 2) to avoid mentioning that strings are hard so folks don’t get discouraged.
Notably Rust did the correct thing by defining multiple slightly incompatible string types for different purposes in the standard library and regularly gets flak for it.
> Notably Rust did the correct thing
In addition to separate string types, they have separate iterator types that let you explicitly get the value you want. So:
String.len() == number of bytes
String.bytes().count() == number of bytes
String.chars().count() == number of unicode scalar values
String.graphemes().count() == number of graphemes (requires unicode-segmentation which is not in the stdlib)
String.lines().count() == number of lines
Really my only complaint is I don't think String.len() should exist; it's too ambiguous. We should have to explicitly state what we want/mean via the iterators, e.g. String.graphemes().count().
That's a real nice API. (Similarly, Python has @ for matmul but there is no implementation of matmul in the stdlib; NumPy provides one so that the `@` operator works.) ugrapheme and ucwidth are one way to get the grapheme count from a string in Python.
It's probably possible to get the grapheme cluster count from a string containing emoji characters with ICU?
Any correctly designed grapheme cluster handles emoji characters. It’s part of the spec (says the guy who wrote a Unicode segmentation library for rust).
In the long run, LLMs make us dumber
Libre-Chip Awarded NLnet Grant to Prototype a CPU That Isn't Vulnerable to Spectre
Speculative execution > Variants: https://en.wikipedia.org/wiki/Speculative_execution
Transient execution CPU vulnerability: https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...
Does RISC-V have speculative execution?
A RISC-V CPU out of graphene would be more efficient.
openhwgroup has open Verilog implementations of RISC-V cores from 2 stages through 6 stages that will boot Linux: https://github.com/openhwgroup
There are various optional feature flags for RISC-V.
The RISC-V open ISA is probably advantageous especially for research implementations.
In a first, Google has released data on how much energy an AI prompt uses
Isn't it less for Google using their TPU compared to everyone else using nvidia?
TPUs are more efficient at LLM workloads because they deliver more TOPS/kWh.
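As a rough sanity check on the scale involved, taking the ~0.24 Wh median energy per prompt figure from Google's report as an assumed input (the 1B prompts/day volume is purely hypothetical):

```python
wh_per_prompt = 0.24                 # Google's reported median, assumed here
prompts_per_kwh = 1000 / wh_per_prompt
daily_mwh = wh_per_prompt * 1_000_000_000 / 1e6  # hypothetical 1B prompts/day

print(f"~{prompts_per_kwh:.0f} prompts/kWh; ~{daily_mwh:.0f} MWh/day at 1B prompts")
```

A chip that does more TOPS/kWh lowers wh_per_prompt directly, which is why the accelerator choice dominates these numbers.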
"OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)" (2025) https://news.ycombinator.com/item?id=44111452
"A Comparison of the Cerebras Wafer-Scale Integration Technology with Nvidia GPU-based Systems for Artificial Intelligence" (2025) https://arxiv.org/html/2503.11698v1
From https://news.ycombinator.com/item?id=44648575 :
> "Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
>> this design will avoid the need for more than 125 million liters of water per year per datacenter
"Microsoft’s Datacenter Community Pledge: To build and operate digital infrastructure that addresses societal challenges and creates benefits for communities" (2024-06) https://blogs.microsoft.com/blog/2024/06/02/microsofts-datac... :
> We will design and operate our datacenters to support society’s climate goals and become carbon negative, water positive and zero waste before 2030. [...]
> By 2025, we will procure 100% renewable energy on a global scale, both significantly expanding and decarbonizing local electricity grids.
> Our datacenter designs are more water efficient than traditional enterprise datacenters, and our plan by 2030 is to replenish more water than we consume locally.
Here's this about CNT cooling:
"Cyberpower begins selling desktop PCs with carbon nanotube CPU cooling" (2025) https://news.ycombinator.com/item?id=44899495
"A carbon-nanotube-based tensor processing unit" (2024) https://news.ycombinator.com/item?id=41322070
Graphene semiconductors should be at least 10X more energy efficient; but how much less water would graphene-based chips waste?
Ask HN: Are tech layoffs due to AI displacing or due to AI pilots failing?
"90% of game developers are working with LLMs"
"90% of corporate AI pilots have failed"
"Entry-level engineers can't get hired": for expecting AI to do things, or for not knowing qualified-engineer things like data structures, algorithms, and refactoring theory and terminology?
Are tech layoffs due to AI displacing or due to AI pilots failing?
It might be possible that AI has gotten so expensive that some employers won't be able to afford real people for the foreseeable future until after some kind of financial recovery occurs.
AI cofounders aren't that expensive. Electricity is becoming more expensive.
The UN SDG Goals, Targets, and Indicators list problems to solve that should be worth money.
How are acquisition cycles linked with broader economic conditions?
(I'm with the "you're going to have to hire those back when you realize what's happened to code quality and maintainability" camp.)
This.
Critical Cache Poisoning Vulnerability in Dnsmasq
Many router firmwares have dnsmasq for DNS but may never be upgraded?
There are a number of other DNS servers which are not written in C, which support transport-secured DNS like DoH (DNS-over-HTTPS), DoT, and DoQ; but do they correctly handle this malformed input?
From the mailing list disclosure, which doesn't yet have a CVE FWIU? https://lists.thekelleys.org.uk/pipermail/dnsmasq-discuss/20... :
Dnsmasq forwards queries with special characters (e.g., ~, !, *, _) to upstream recursive resolvers.
Some upstream recursive resolvers silently discard such malformed queries (no NXDomain/ServFail response).
Dnsmasq does not validate or detect this situation, and waits silently, creating a large attack window.
During this window, attackers can brute-force TxID (16-bit) and source port (16-bit) with a high probability of success (birthday paradox effect).
Security Impact
Attackers can poison any cached domain name in Dnsmasq.
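A back-of-envelope sketch of why the long silent window matters (the packet counts here are illustrative, not from the disclosure):

```python
def spoof_success_probability(forged_packets: int, outstanding_queries: int = 1) -> float:
    """Chance that at least one forged response matches a live
    (TxID, source port) pair: 16 + 16 bits = 2**32 combinations."""
    space = 2 ** 32
    miss = 1 - outstanding_queries / space
    return 1 - miss ** forged_packets

# One outstanding query vs. 100 concurrent queries for the same name
# (the birthday-style gain the disclosure alludes to):
p1 = spoof_success_probability(1_000_000)
p100 = spoof_success_probability(1_000_000, outstanding_queries=100)
print(f"{p1:.4%} vs {p100:.2%}")
```

The longer the resolver waits silently, the more forged packets fit in the window, and concurrent outstanding queries multiply the per-packet hit rate.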
[...] We recommend adding:
Detection mechanisms when upstream resolvers remain silent.
Rate limiting and spoof-detection techniques, similar to those in PowerDNS
> PowerDNS Mitigation: https://docs.powerdns.com/recursor/settings.html#spoof-nearm... spoof-nearmiss-max
Explosive neural networks via higher-order interactions in curved manifolds
"Explosive neural networks via higher-order interactions in curved statistical manifolds" (2025) https://arxiv.org/abs/2408.02326 :
> Abstract: [...] By leveraging a generalisation of the maximum entropy principle, we introduce curved neural networks as a class of models with a limited number of parameters that are particularly well-suited for studying higher-order phenomena. Through exact mean-field descriptions, we show that these curved neural networks implement a self-regulating annealing process that can accelerate memory retrieval, leading to explosive order-disorder phase transitions with multi-stability and hysteresis effects. Moreover, by analytically exploring their memory-retrieval capacity using the replica trick, we demonstrate that these networks can enhance memory capacity and robustness of retrieval over classical associative-memory networks [...]
Spiking transistors model non-linear change in state. Would spiking transistors be useful for physically realizing the "explosive" behavior modeled in "Explosive neural networks via higher-order interactions in curved statistical manifolds" (2025) https://arxiv.org/abs/2408.02326
"Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4
Are spiking transistors useful for this too?
SystemD Service Hardening
Why don't distros flip more of these switches? Are there cons of being more aggressive with these settings? It's really a lot for many people to tinker with.
Because they/we don't have sufficient integration tests to verify that the core system services are working after tightening down each parameter.
From https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for?:
systemd-analyze security
desbma/shh generates SyscallFilter and other systemd unit rules from straces, similar to how audit2allow generates SELinux policies by grepping for AVC denials in permissive mode (given kernel parameters `enforcing=0 selinux=1`); but should strace be installed in production? desbma/shh: https://github.com/desbma/shh
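For reference, a hardening drop-in might look like the following. Each option is from systemd.exec(5), but whether a given service tolerates each one has to be verified per-service (e.g. with integration tests and `systemd-analyze security`); the file path is hypothetical:

```ini
# /etc/systemd/system/example.service.d/hardening.conf (hypothetical unit)
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictNamespaces=yes
LockPersonality=yes
MemoryDenyWriteExecute=yes
SystemCallFilter=@system-service
CapabilityBoundingSet=
```

An empty CapabilityBoundingSet or a tight SystemCallFilter is exactly the kind of switch that breaks services in non-obvious ways, which is why distros ship conservative defaults.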
Neural Visibility Cache for Real-Time Light Sampling
"Neural Visibility Cache for Real-Time Light Sampling" (2025) https://arxiv.org/abs/2506.05930
Llama-Scan: Convert PDFs to Text W Local LLMs
Looking at the code, this converts PDF pages to images, then transcribes each image. I might have expected a pdftotext post-processor. The complexity of PDF I guess ...
Shell: GNU parallel, pdftotext
Python: PyPDF2, pdfminer.six, Grobid, PyMuPDF; pytesseract (wraps Tesseract, C++)
paperetl is built on grobid: https://github.com/neuml/paperetl
annotateai: https://github.com/neuml/annotateai :
> annotateai automatically annotates papers using Large Language Models (LLMs). While LLMs can summarize papers, search papers and build generative text about papers, this project focuses on providing human readers with context as they read.
pdf.js-hypothes.is: https://github.com/hypothesis/pdf.js-hypothes.is:
> This is a copy of Mozilla's PDF.js viewer with Hypothesis annotation tools added
Hypothesis is built on the W3C Web Annotations spec.
dokieli implements W3C Web Annotations and many other Linked Data Specs: https://github.com/dokieli/dokieli :
> Implements versioning and has the notion of immutable resources.
> Embedding data blocks, e.g., Turtle, N-Triples, JSON-LD, TriG (Nanopublications).
A dokieli document interface to LLMs would be basically the anti-PDF.
Rust crates: rayon handles parallel processing, pdf-rs, tesseract (C++)
pdf-rs examples/src/bin/extract_page.rs: https://github.com/pdf-rs/pdf/blob/master/examples/src/bin/e...
AI doesn't lighten the burden of mastery
This is a really good post. I'm a naturally controlling person, and I care about my craft a lot, so even in my recent dabbling (on a ~3000 LOC project) with agentic coding, one of the things I naturally did from the start was not just skim the diffs that the AI generated, but decide for myself what technologies should be used and describe the logic and architecture of the code I wanted in detail, to keep my mental model fresh and accurate. I read every single line of code as if it were someone else's, explicitly asking the AI to restructure anything that didn't feel like the way I'd have implemented it, thus ensuring that everything fit my mental model. I also went in and manually added features, and always did all debugging myself as a natural way to get more familiar with the code.
One of the things I noticed is that I'm pretty sure I was still more productive with AI, but I still had full control over the codebase, precisely because I didn't let AI take over any part of the mental modelling part of the role, only treating it as, essentially, really really good refactoring, autocompletion, and keyboard macro tools that I interact with through an InterLISP-style REPL instead of a GUI. It feels like a lever to actually enable me to add more error handling, make more significant refactors for clarity to fit my mental model, and so on. So I still have a full mental model of where everything is, how it works, how it passes data back and forth, and the only technologies I'm not familiar with the use of in the codebase are things I've made the explicit choice not to learn because I don't want to (TKinter, lol).
Meanwhile, when I introduced my girlfriend (a data scientist) to the same agentic coding tool, her first instinct was to essentially vibe code: let it architect things however it wanted, not describe logic, not build the mental model and list of features explicitly herself, and skim the code (if that). We quickly ended up in a cul-de-sac where the code was unfixable without a ton of work that would've eliminated all the productivity benefits.
So basically, it's like that study: if you use AI to replace thinking, you end up with cognitive debt and have to struggle to catch up which eventually washes out all the benefits and leaves you confused and adrift
Having read parts of e.g. the "Refactoring" and "Patterns of Enterprise Architecture" books and ThoughtWorks and Fowler web pages and blog posts, and "The Clean Coder", and about distributed computing algorithms; I've been working with a limited set of refactoring terms in my prompts like "factor out", "factor up", "extract an interface/superclass from".
TIL according to Wikipedia, the more correct terms are "pull up" and "push down".
How should they learn terms for refactoring today? Should they too train to code and refactor and track customer expectations without LLMs? There's probably an opportunity to create a good refactoring exercise; with and without LLMs and IDEs and git diff.
System Prompt, System Message, User, User Prompt, Agent, Subagent, Prompt Template, Preamble, Instructions, Prompt Prefix, Few-Shot examples; which thing do we add this to:
First, summarize Code Refactoring terms in a glossary.
Code refactoring: https://en.wikipedia.org/wiki/Code_refactoring
"Ask HN: CS papers for software architecture and design?" (2017) https://news.ycombinator.com/item?id=15778396
"Ask HN: Learning about distributed systems?" (2020) https://news.ycombinator.com/item?id=23932271
Would methods for software quality teams like documentation and tests prevent this cognitive catch-up on so much code with how much explanation at once?
Generate comprehensive unit tests for this. Generate docstrings and add comments to this.
If you build software with genai from just a short prompt, it is likely that the output will be inadequate in regards to the unstated customer specifications and that then there will need to be revisions. Eventually, it is likely that a rewrite or a clone of the then legacy version of the project will be more efficient and maintainable. Will we be attached to the idea of refactoring the code or to refactoring the prompts and running it again with the latest model too?
Retyping is an opportunity to rewrite! ("Punch the keys" -- Finding Forrester)
Are the prompts worth more than the generated code now?
simonw/llm by default saves all prompt inputs and outputs in a sqlite database. Copilot has /save and gemini-cli has /export, but they don't yet autosave or flush before attempting to modify code given the prompt output?
Catch up as a human coder; catch up the next LLM chat context with the prior prompt sequences (and manual modifications, which aren't, but probably should be, auto-committed distinctly from the LLM response's modifications).
Hardware and software for scanning and OCR old magazines
I have a collection of old history magazines, starting from 1969, that I would like to scan and convert to text. What tools can be used?
Book scanning: https://en.wikipedia.org/wiki/Book_scanning
awesome-scanning lists Devices, Software: https://github.com/ad-si/awesome-scanning
book scanner: https://hn.algolia.com/?q=book+scanner :
- Foot pedal book scanner
Thank you!
NP!
/?awesome-selfhosted "scan" : https://github.com/awesome-selfhosted/awesome-selfhosted#doc... :
- DMS: Document Management System
- paperless-ngx
- papermerge
But then to do search snippets and/or genai with citations of scanned PDFs, images, and hopefully .txt and .md too: https://news.ycombinator.com/item?id=44321180 :
> paperai/paperetl, paperqa2, paperqa-zotero,
PyG 2.0: Scalable Learning on Real World Graphs
PyG: pytorch_geometric: https://github.com/pyg-team/pytorch_geometric
Graphene capacitors achieve rapid, high-depth modulation of terahertz waves
"Achieving 100% amplitude modulation depth in the terahertz range with graphene-based tuneable capacitance metamaterials" (2025) https://www.nature.com/articles/s41377-025-01945-4
Deep-Sea Desalination Pulls Fresh Water from the Depths
They don't talk about pollution: some pollution will drop off, while coagulating microplastics can be much higher at depth. The whole ocean is basically a fractionating column. Of course they are going to want to dump the salt at the bottom to complete the mass transfer loop of the upwelling water. This is going to mess up the whole thing.
Humans should be operating in closed water systems. We would have to do that anywhere else we go; we should be turning Earth into a well-run spaceship.
Filter it out already, problem solved. Look for solutions, not for problems. If microplastics do indeed concentrate in the depths this would offer a chance to take them out of the environment, the same goes for other pollutants.
Aren't there radioisotopes in sea water? If you're filtering microplastics out of deep sea water you might as well collect those too?
"Fungus breaks down ocean plastic" (2024) https://news.ycombinator.com/item?id=40676239
> Of course they are going to want to dump the salt in the bottom to complete the mass transfer loop of the upwelling water.
This method of desalination is designed to limit hyperaccumulation of salt in the ocean and the apparatus:
"Extreme salt-resisting multistage solar distillation with thermohaline convection" (2023) https://www.cell.com/joule/abstract/S2542-4351(23)00360-4 .. "Desalination system could produce freshwater that is cheaper than tap water" (2023) https://news.ycombinator.com/item?id=39507702 :
> Here, inspired by a natural phenomenon, thermohaline convection, we demonstrate a solar-powered multistage membrane distillation with extreme salt-resisting performance. Using a confined saline layer as an evaporator, we initiate strong thermohaline convection to mitigate salt accumulation and enhance heat transfer.
The thermal difference between the deep sea water and surface water (or waste heat heated water, or solar heated water) can be used to generate electricity.
"140-year-old ocean heat tech could supply islands with limitless energy" https://news.ycombinator.com/item?id=38222695 :
OTEC: Ocean thermal energy conversion: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio...
"Ask HN: Does OTEC work with datacenter heat, or thermoelectrics?" https://news.ycombinator.com/item?id=40821522 .. "Ask HN: How to reuse waste heat and water from AI datacenters?" https://news.ycombinator.com/item?id=40820952
At 40-44% efficiency given at least 1,435°C, solid-state thermophotovoltaics are more efficient than steam turbines at converting a thermal gradient to electricity.
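For context, the Carnot limit eta = 1 - T_cold/T_hot (temperatures in kelvin) bounds both cases. A sketch with assumed water and emitter temperatures shows why OTEC's small ocean gradient caps out far below a hot TPV emitter:

```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Carnot limit computed from Celsius temperatures."""
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

otec = carnot_efficiency(25, 5)    # surface vs. deep sea water (assumed temps)
tpv = carnot_efficiency(1435, 25)  # 1,435 degC TPV emitter vs. ambient

print(f"OTEC limit ~{otec:.1%}, TPV limit ~{tpv:.1%}")
```

So OTEC can never exceed a few percent regardless of engineering, while the 40-44% TPV figure sits comfortably under its ~83% Carnot ceiling.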
"Renewables Game-Changer? 44% Efficient TPV Cell" (2024) https://eepower.com/tech-insights/renewables-game-changer-44...
Thermophotovoltaic energy conversion: https://en.wikipedia.org/wiki/Thermophotovoltaic_energy_conv...
"Using solar energy to generate heat at 1050°C high temperatures" (2024) https://news.ycombinator.com/item?id=40419617
NASA Systems Engineering Handbook (2018) [pdf]
awesome-safety-critical lists a number of Handbooks, but not yet this one. https://awesome-safety-critical.readthedocs.io/en/latest/ :
> NASA-GB-8719.13 - 2004-03-31 - NASA Software Safety Guidebook
>> NASA’s Software Safety Guidebook (pdf file). The handbook complement to the Software Safety Standard.
> NASA-HDBK-8709.24 - 2015-11-23 - NASA Safety Culture Handbook
Launch HN: Embedder (YC S25) – Claude code for embedded software
Hey HN - We’re Bob and Ethan from Embedder (https://embedder.dev), a hardware-aware AI coding agent that can write firmware and test it on physical hardware.
Here’s a demo in which we integrate a magnetometer for the Pebble 2 Duo smartwatch: https://www.youtube.com/watch?v=WOpAfeiFQkQ
We were frustrated by the gap between coding agents and the realities of writing firmware. We'd ask Cursor to, say, write an I2C driver for a new sensor on an STM32, and it would confidently spit out code that used non-existent registers or HAL functions from the wrong chip family. It had no context, so it would just guess and the code is always wrong.
Even when it wrote the right code, the agent had no way of interacting with your board, so the developer would have to manually test it and prompt the agent again to fix any bugs they found, making current solutions not ideal when working in an embedded context.
That’s why we are building Embedder, a hardware-aware coding agent that is optimized for work in embedded contexts. It understands your datasheets and schematics and can also flash and test on your hardware.
First, you give it context by uploading datasheets, reference manuals, schematics, or any other documentation on our web console and our coding agent will automatically have context when it executes tasks in the command line.
Second, Embedder can directly interact with your hardware to close the development loop. The agent is able to use a serial console just like a regular developer to read from your board and verify outputs. To solve more complex bugs or identify hardware issues, the coding agent is also able to launch a debugging agent optimized for step-through debugging workloads and to interact with local or remote gdbservers.
You can try it out today. It’s an npm package you can install and run from your terminal:
npm i -g @embedder/embedder && embedder
It's free for the rest of this month while we're in beta. After that, we're planning a usage-based model for individual developers and a team plan with more advanced features. We’d love to get feedback from the community, or hear about your experiences of embedded development. We’ll be in the comments to respond!
One test case that I've used with LLMs that generate code: generate a stop light.
In short,
Generate a stoplight with unit tests.
I have not yet found a model that produces sufficiently safe code. I haven't tested this in a while, but I don't expect any current LLM to be sufficient at this task.
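To make the task concrete, here is a minimal sketch of the core safety invariant such a generated stop light should uphold (the phase table, function names, and step count are arbitrary, not from any LLM's output):

```python
from itertools import cycle

# Phases for a two-road intersection: (north-south, east-west).
PHASES = [("green", "red"), ("yellow", "red"),
          ("red", "green"), ("red", "yellow")]

def run_light(steps: int):
    """Advance the light, asserting the safety property at every step:
    at least one road is always red, so both roads never move at once."""
    phases = cycle(PHASES)
    history = []
    for _ in range(steps):
        ns, ew = next(phases)
        assert "red" in (ns, ew), "unsafe phase: both roads may move"
        history.append((ns, ew))
    return history

states = run_light(8)
```

In my experience the generated code tends to get the happy path right and this invariant (plus timing and fault states) wrong, which is exactly what the unit tests should pin down.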
Maybe an LLM could generate a safe stop light with formal methods as a primary meta procedure? From https://news.ycombinator.com/item?id=44889967 re: formal methods:
> We should not expect an LLM trained solely on formally-verified code to produce formally-verified code. I don't think that also training on specs and hateful training material will fix that.
ARM adds neural accelerators to GPUs
Study: Social media probably can't be fixed
Do all of these points apply to the traditional media funhouse mirror that we love to hate, too?
> "The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve."
I see illegal war, killing without due process, and kleptocracy. It's partly the media's fault. It's partly the peoples' fault for depending on advertising to subsidize free services, for gawking, for sharing without consideration, for voting in ignorance.
Social media reflects the people; who can't be "fixed" either.
If you're annoyed with all of these people on here who are lesser than and more annoying than you, then stop spending so much time at the bar.
Can the bar be fixed?
PYX: The next step in Python packaging
To be honest, this was just a matter of time. As a long-time Python developer, I just can’t wrap my head around the lack of something like this. GitHub was going to get hosted packages for Python but never did because it “didn’t align with their strategy objectives and a reallocation of resources” [1] (or some other similar corpospeak).

Astral is a great company and I think we can’t question what they’ve achieved and provided to the Python community. uv is a game changer and solves one of the core issues with Python by providing a unified tool that’s also fast, reliable and easy to use. In fact, after using uv for the first time (coming from a combination of pyenv + poetry) I never wanted to go back, and this is something all of my peers have experienced too.

I’m glad it’s Astral who is doing this, and of course they will have to make money one way or another (which is perfectly fine and I don’t think anyone on this forum can be against that, as long as they are actually providing real value), but I was honestly tired of the paralysis on this matter. I did try to build a registry (pyhub.net) but being just one person with almost no resources and having another full-time business made it impossible.

Anyway, congrats to the team for the effort!

[1] https://github.com/orgs/community/discussions/8542
[deleted]
I'm worried it might get bad
Don't worry -- soon after that, there will be high demand for human coders as companies scramble to hire to rewrite all the buggy and vuln-ridden AI-hallucinated software. We're on the verge of two revolutions in tech, not one.
I wouldn’t bet on this
Why not? Seems to match the trend of tech innovation creating more demand for tech.
The reality is lots of software problems can’t be solved with the level of “intelligence” LLMs have. And if they could, it wouldn’t be just software in danger - it’d be every human profession. Even the physical ones, since AI would quickly figure out how to build machinery to automate those.
>> Why not?
Because it's just cope. Look at the current reality. Are companies rushing to fix bad or even buggy code written by human devs? No, not in most cases. In most cases, if a piece of code "works", it is left the hell alone. And that's the thing about AI code: it does work. The quality is irrelevant in the overwhelming majority of cases (especially if it's other AIs that are adding to it, which is the case more and more often).
> The quality is irrelevant in the overwhelming majority of cases
Software quality is especially important in safety critical applications.
We should not expect an LLM trained solely on formally-verified code to produce formally-verified code. I don't think that also training on specs and hateful training material will fix that.
So then we're back to the original software engineering objectives of writing better SAST, DAST, Formal Method, side channel, and fuzzing tools for software quality assurance.
I've been to birthday parties that employed more people than "safety critical" software development. We're talking about 99.99% of the software development jobs evaporating.
I think we're talking like 100% of everyone gets a new power saw!
Compare traditional woodworking with modern carpentry on quality, longevity, and marginal efficiency.
From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :
> Which universities teach formal methods?
> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?
> Is there a certification for formal methods? Something like for Engineer status in other industries?
How to safely escape JSON inside HTML SCRIPT elements
What about CDATA, which XML and XHTML support? HTML5 does not support CDATA.
CDATA: https://en.wikipedia.org/wiki/CDATA
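For HTML5, where CDATA isn't available, the usual approach is to escape the JSON itself; a hedged Python sketch (the helper name is mine, not from the article) using the fact that `\u003c` is a legal JSON string escape for `<`:

```python
import json

# Escape "<", ">", and "&" as \uXXXX escapes inside the serialized JSON,
# so "</script>" and "<!--" can never appear literally inside the
# inline <script> element. JSON.parse on the client decodes them back.
def script_safe_json(obj):
    return (json.dumps(obj)
            .replace("<", "\\u003c")
            .replace(">", "\\u003e")
            .replace("&", "\\u0026"))

payload = script_safe_json({"html": "</script><b>&amp;</b>"})
assert "</script>" not in payload
# Round-trips: json.loads(payload) == {"html": "</script><b>&amp;</b>"}
```

Since `<` can only legally occur inside JSON strings, replacing it over the whole serialized document is safe.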
This would work for XHTML but not HTML5 IIUC: <script>
</script>

OpenSSH Post-Quantum Cryptography
In light of the recent hilarious paper around the current state of quantum cryptography[1], how big is the need for the current pace of post quantum crypto adoption?
As far as I understand, the key material for any post quantum algorithm is much, much larger compared to non-quantum algorithms which leads to huge overheads in network traffic and of course CPU time.
The page only talks about adopting PQC for key agreement for SSH connections, not encryption in general so the overhead would be rather minimal here. Also from the FAQ:
"Quantum computers don't exist yet, why go to all this trouble?"
Because of the "store now, decrypt later" attack mentioned above. Traffic sent today is at risk of decryption unless post-quantum key agreement is used.
"I don't believe we'll ever get quantum computers. This is a waste of time"
Some people consider the task of scaling existing quantum computers up to the point where they can tackle cryptographic problems to be practically insurmountable. This is a possibility. However, it appears that most of the barriers to a cryptographically-relevant quantum computer are engineering challenges rather than underlying physics. If we're right about quantum computers being practical, then we will have protected vast quantities of user data. If we're wrong about it, then all we'll have done is moved to cryptographic algorithms with stronger mathematical underpinnings.
Not sure if I'd take the cited paper (while fun to read) too seriously to inform my opinion on the risks of using quantum-insecure encryption, rather than as a cynical take on hype and window dressing in QC research.
It's been "engineering challenges" for 30 years. At some point, "engineering challenges" stops being a good excuse, and that point was about 20 years ago.
At some point, someone may discover some new physics that shows that all of these "engineering challenges" were actually a physics problem, but quantum physics hasn't really advanced in the last 30 years so it's understandable that the physicists are confused about what's wrong.
You might be right that we'll never have quantum computers capable of cracking conventional cryptographic methods, but I'd rather err on the side of caution in this regard considering how easy it is to switch, and how disastrous it could be otherwise.
"A First Successful Factorization of RSA-2048 Integer by D-Wave Quantum Computer" (2025-06) https://ieeexplore.ieee.org/document/10817698
Yeah, except when your "2048-bit" numbers are guaranteed to have factors that differ by exactly two bits, you can factor them with any computer you want.
The D-wave also isn't capable of Shor's algorithm or any other quantum-accelerated version of this problem.
I was at a lecture by a professor who's working in the field, his main argument was that quantum computers are physically impossible to scale.
He presented us with a picture of him and a number of other very important scientists in this field, none of them sharing his attitude. We then joked that there is a quantum entanglement of Nobel prize winners in the picture.
I don't think that that professor was correct.
The universe is constantly doing large, scaled quantum computations.
The number of error-corrected qubits per QC will probably increase at an exponential rate.
Whether there is a problem decomposition strategy for RSA could change.
Oh, entanglement and the prize! Adherence to Bell's is abstruse and obtuse. Like attaching to a student of Minkowski's who served as an honorable patent examiner in Europe and moved to America. We might agree that there are many loopholes by which information sharing through entanglement is possible; that Bell's theorem is not a real limit to communications or QC because there are many loopholes.
D-Wave themselves do not emphasize this use case and have said many times that they don't expect annealing quantum computers to be used for this kind of decryption attack. Annealers are used for optimization problems where you're trying to find the lowest energy solution to a constraint problem, not Shor's Algorithm.
In that sense, they're more useful for normal folks today, and don't pose as many potential problems.
How to make almost anything (2019)
See also: 2020 Version with videos: https://fab.cba.mit.edu/classes/863.20/
The "Week 8: Molding and Casting" link 404s.
This is important because bioplastics are so tensile.
Ideas for another week of material?
Programmable matter, nanoscale self-assembly, AI material design
Curious about the training data of OpenAI's new GPT-OSS models? I was too
OP seems to have run a programming language detector on the generated texts, and made a graph of programming language frequencies: https://pbs.twimg.com/media/Gx2kvNxXEAAkBO0.jpg?name=orig
As a result, OP seems to think the model was trained on a lot of Perl: https://xcancel.com/jxmnop/status/1953899440315527273#m
LOL! I think these results speak more to the flexibility of Perl than any actual insight on the training data! After all, 93% of inkblots are valid Perl scripts: https://www.mcmillen.dev/sigbovik/
That inkblot thing can be created for any language.
China sets its first renewable standards for steel, cement and polysilicon
What a good idea!
> Newly built data centres in so-called national hub nodes must use at least 80% green electricity, while targets for the other industries vary by province.
FWIU aluminum waste can feed into steel production; (would be) waste outputs can be inputs to other processes.
For example "red mud" is a waste output of aluminum production.
"Green steel from red mud through climate-neutral hydrogen plasma reduction" (2024) https://www.nature.com/articles/s41586-023-06901-z
Double-slit experiment holds up when stripped to its quantum essentials
I am more interested in its explanation, now that the theory has been proven correct again and again.
Especially interested in "delayed choice quantum erasure experiment", where you decide to determine the "which path" after the photon has passed through the slits and hit the detector. And depending on your later decision the photon seems to rewrite history going back in time.
I don’t have a source to hand at the moment, but when I looked into the famous Delayed Choice Quantum Erasure experiment the consensus seemed to be:
- The double slit experiment’s conclusions still hold, but:
- The particularly exciting and stark results of the Quantum Erasure experiment may have been misinterpreted or miscommunicated to the public, in particular:
- The presenter of PBS SpaceTime has said that he regrets certain things about how he worded his video on the Quantum Erasure experiment, and I think may have left a comment on the video to that effect.
Every time I look into QM, I keep coming back to the same fundamental axiom: “Quantum Mechanics’ weirdnesses can make otherwise straightforward things frustrating, but will never make interesting inventions possible.” Like how entanglement is able to break locality (which is frustrating) but without breaking causality (which would be interesting). If you hear about a quantum principle and think “Wow, I could use that to build X,” then it’s more likely that you’re not fully understanding the principle (not “you” specifically, I’ve fallen for this myself countless times).
The only exception seems to be Quantum Computing, but even that only arises out of a deep deep mathematical analysis (you can’t get to QC on your own from the things in popular science books) and is only applicable to really niche applications.
Entanglement doesn't violate locality, it's measurement that does that. And that's because we don't have a rigorous handle on what measurement actually is, which is why we call it "the measurement problem"!
Didn't they originally use polarizing filters to measure photonic phase?
If it were possible to measure the phase of a photon after a beam splitter in a nondestructive way, shouldn't it be possible to determine whether measuring one causes state collapse in the other?
This says that photonic entanglement is polarization, and that photonic phase can be inferred from second order of intensity, IIUC:
"Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
Shouldn't it then be possible to nondestructively measure photons and thus entanglement?
> If it were possible to measure the phase of a photon after a beam splitter in a nondestructive way
"Non-destructive measurement" is an oxymoron. It's not a real measurement if it doesn't destroy the coherence of entanglement. Weak measurements do destroy some entanglement, just not "all" of it.
> "Non-destructive measurement" is an oxymoron. It's not a real measurement if it doesn't destroy the coherence of entanglement.
If there were no loopholes to Bell's theorem I would agree.
> Weak measurements do destroy some entanglement, just not "all" of it.
IDK if that's true. Are all methods of observing probabilistic states destructive forms of measurement?
Does a camera on a candle diminish the candle, or does it take energy (and information) from the "wake" of the field moments or field disturbances?
I don't think that anyone realizes that it's possible to infer photonic phase from intensity (by Huygens-Steiner).
> Weak measurements do destroy some entanglement, just not "all" of it.
Which measure of degree of entanglement best characterizes state linkage across spacetime?
Depending on definition, doesn't a laser pointer entangle phase states across spacetime, but only slower than c (the speed of transverse photonic waves in a total vacuum)? Are the states synchronized with a constant delay?
Quantum discord: https://en.wikipedia.org/wiki/Quantum_discord :
> In quantum information theory, quantum discord is a measure of nonclassical correlations between two subsystems of a quantum system. It includes correlations that are due to quantum physical effects but do not necessarily involve quantum entanglement.
Quantum mutual information: https://en.wikipedia.org/wiki/Quantum_mutual_information
Show HN: Synchrotron, a real-time DSP engine in pure Python
Yes, Python.
I can already hear the screams from the rafters telling me how terrible of a choice Python is - but in my case, I valued modularity, extensibility, hackability over raw performance. (It was also a challenge to myself to see how far I can get without referencing existing implementations)
Synchrotron processes nodes: simple Python classes with typed I/O and a render() method for processing. It can be as concise as 5 lines:
class IncrementNode(Node):
    input: StreamInput
    output: StreamOutput

    def render(self, ctx):
        self.output.write(self.input.read(ctx) + 1)
Nodes can then be spawned and linked programmatically or in the graphical editor. Synchrotron handles the rest at runtime. Besides the web UI, you can also interact with the engine via Python, REST, DSL, or standalone TUI.

Currently you can build synths, FX chains, MIDI instruments, arpeggiators, controllers, or just mess about with sound :>
Editor: https://synchrotron.thatother.dev/ Source: https://github.com/ThatOtherAndrew/Synchrotron
It's still experimental (and my first ever shipped project), but I'd love feedback from people who tinker with audio/DSP/live coding. Docs are terrible currently, but that's my next big goal!
Does it work on mobile yet?
Is there a way to load an example file; maybe with another button next to "Load file"?
Notes re: spotify/pedalboard (JUCE) and node-based UIs or "patch bay" UIs: https://news.ycombinator.com/item?id=44604024#44648290
Re: samin/polymath23, which does BPM and tonal quality classification: https://news.ycombinator.com/item?id=34782526
No mobile support as of yet. (Not pleasantly, at least.)
Currently there is a hard-coded example file loaded with the demo command, but that's the extent of automatic demo loading. Planning to improve this in future alongside a full frontend rewrite (too much tech debt).
Thanks for the extra links, looks like a good read :>
Electric motor runs without metal coils
"Core-sheath composite electric cables with highly conductive self-assembled carbon nanotube wires and flexible macroscale insulating polymers for lightweight, metal-free motors" (2025) https://link.springer.com/article/10.1007/s42114-025-01302-4
Running C++ on Cloudflare WASM
There may be a way to run container2wasm containers on CF WASM?
There's a vscode-container-wasm-gcc-example with gcc but not yet g++ installed via apt: https://github.com/ktock/vscode-container-wasm-gcc-example
Interesting, thanks, I wasn't aware of container2wasm. I do wonder what the output sizes are. They don't mention compatibility with CF's runtime, which is more restrictive than any of the runtimes they do mention!
Electron beam irradiation decomposes Teflon-like fluoroplastics efficiently
I'm surprised they didn't mention the beam energy in the snippet, and unfortunately I can't get the full paper at the moment. I think Teflon is known to be particularly susceptible to radiation damage from gamma rays, which makes sense as that will produce recoil electrons inside the material.
From "Getting hit by lightning is good for some tropical trees" (2025) https://news.ycombinator.com/item?id=43506262 :
>> "Gamma radiation is produced in large tropical thunderstorms" (2024)
>> "Gamma rays convert CH4 to complex organic molecules [like glycine,], may explain origin of life" (2024)
TIL that gamma radiation destroys Teflon. And lightning storms produce gamma radiation.
And that "Lightning on Earth is sparked by a powerful chain reaction from outer space, simulations show" (2025) https://www.livescience.com/physics-mathematics/lightning-on... :
"Photoelectric Effect in Air Explains Lightning Initiation and Terrestrial Gamma Ray Flashes" (2025) https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2025JD04...
Ask HN: Does your company back up its GitHub/Gitlab source code?
Show HN: Mathpad – Physical keypad for typing math symbols
Here's something different than your usual fare: A physical keypad that lets you directly type math!
Ever tried typing mathematical equations in your code IDE, email, or on Slack? You might know it can be tricky. Mathpad solves this with dedicated keys for Greek letters, calculus symbols, and more. Press the ∫ key and get ∫, in any application that accepts text. It uses Unicode composition, so it works everywhere: Browsers, chat apps, code editors, Word, you name it. Basically, anywhere you can type text, Mathpad lets you type mathematics.
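Illustrative only (not Mathpad's actual firmware): "Unicode composition" here can mean emitting either a precomposed symbol like ∫, or a base character plus a combining mark, e.g. for x̄:

```python
import unicodedata

# A precomposed symbol is a single code point:
integral = "\u222B"  # ∫ INTEGRAL
assert unicodedata.name(integral) == "INTEGRAL"

# A composed symbol is a base character followed by a combining mark:
x_bar = "x" + "\u0304"  # U+0304 COMBINING MACRON -> renders as x̄
assert unicodedata.name("\u0304") == "COMBINING MACRON"
assert len(x_bar) == 2  # two code points, one displayed glyph
```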
I built Mathpad after getting frustrated with the friction of typing equations in e.g. Word, and what a pain in the ass it was to find the specific symbols I needed. I assumed that a product like Mathpad already existed, but that was not true and I had to build it myself.
It turned out to be pretty useful! Three years of solo development later, I'm launching on Crowd Supply. One of the trickiest parts of this project was finding someone who could manufacture custom keycaps with mathematical symbols. Shoutout to Loic at 3dkeycap.com for making it possible!
Fully open source (hardware + software): https://github.com/Summa-Cogni/Mathpad Campaign: https://www.crowdsupply.com/summa-cogni/mathpad Project log: https://hackaday.io/project/186205-mathpad-the-math-keypad
How much faster is Mathpad than creating a per-document table of symbols with their Unicode numbers and/or latex values and copy/pasting until you remember the Ctrl-Shift-u nnnn sequence?
Not much faster if you only need a few symbols, and if you only work in one document. I used to make such tables for large documents before I created Mathpad.
Mathpad's killer feature is working anywhere you can type text, not only document editors. I've found it particularly useful when putting together technical presentations in Powerpoint, and when documenting the algorithms I write at work which are rather math and physics heavy.
ReproZip – reproducible experiments from command-line executions
Maybe they're just using "experiment" as some kind of data-scientist jargon that I don't understand, but this reads to me like just a way to package Python code, and from the description I don't understand why or when I would prefer this to making an sdist or wheel with standard tools.
Edit: I guess the idea is that this is automatically discovering non-Python system dependencies and attempting to include them as well? Either way, the developers should probably get in touch with the people behind https://pypackaging-native.github.io/ which has been trying to identify and solve problems with using the standard Python ecosystem tools in the "PyData ecosystem". (This effort has led to proposals such as https://peps.python.org/pep-0725/.)
Does manylinux help with this? https://news.ycombinator.com/item?id=43553198 :
> Manylinux requires tools called auditwheel for Linux, delocate for MacOS, and delvewheel for windows; which do something like ldd to list the shared libraries.
From the auditwheel readme: https://github.com/pypa/auditwheel :
> auditwheel show: shows external shared libraries that the wheel depends on (beyond the libraries included in the manylinux policies), and checks the extension modules for the use of versioned symbols that exceed the manylin
> auditwheel repair: copies these external shared libraries into the wheel itself, and automatically modifies the appropriate RPATH entries such that these libraries will be picked up at runtime. This accomplishes a similar result as if the libraries had been statically linked without requiring changes to the build system. Packagers are advised that bundling, like static linking, may implicate copyright concerns
PyInstaller docs: https://pyinstaller.org/en/stable/ :
> PyInstaller bundles a Python application and all its dependencies into a single package. The user can run the packaged app without installing a Python interpreter or any modules. PyInstaller supports Python 3.8 and newer, and correctly bundles many major Python packages such as numpy, matplotlib, PyQt, wxPython, and others.
conda/constructor is a tool for creating installers from conda packages: https://github.com/conda/constructor
Grayskull creates conda-forge recipes from PyPI and other packages: https://github.com/conda/grayskull
conda-forge builds for Windows, macOS, and Linux on amd64 and arm64, and emscripten-forge builds conda packages for WebAssembly (WASM).
SBOM tools attempt to discover package metadata, which should include a manifest with per-file checksums. Can dependency auto-discovery discover package metadata relevant to software supply chain security?
dvc is a workflow tool layered on git that supports Experiments: https://dvc.org/doc/start/experiments/experiment-tracking :
> Experiment: A versioned iteration of ML model development. DVC tracks experiments as Git commits that DVC can find but that don't clutter your Git history or branches. Experiments may include code, metrics, parameters, plots, and data and model artifacts.
A sufficient packaging format must have per-file checksums and signatures. https://SLSA.dev/ says any of TUF, Sigstore.dev, and/or OCI containers with signatures suffice.
All of these tools definitely help for the people who use them. In particular, the manylinux standard and associated tools are why I can reliably `pip install numpy` without even thinking about whether it will work, and regardless of whether (on Linux) there is a system package for OpenBLAS (which will be disregarded, unless of course you use a system-packaged version of Numpy instead). But there are also definitely still unmet needs.
`pip install numpy` does not install the most optimized build for a given platform, or e.g. MKL- or BLAS-linked packages. `pip install numpy-mkl` is not the official way, as those binary wheels are built by a third party.
From https://news.ycombinator.com/item?id=37808036 :
> conda-forge maintainer docs > Switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base.html#...
rattler-build supports CPU levels and CUDA levels. Thus conda-forge packages may be more performant on modern CPUs and GPUs than the average PyPI package: https://news.ycombinator.com/item?id=41306658
A Python dict that can report which keys you did not use
Does this handle nested dicts (in pickles in sql, which I had to write code to survey one time)?
A queue-based traversal has flatter memory utilization for deeply nested dicts than a recursive traversal in Python without TCO.
Given a visitor pattern traversal, a visit() function can receive the node path as a list of path components, and update a Counter() with a (full,path,tuple) or "delimiter\.escaped.path" key.
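A sketch of that queue-based, path-tracking traversal (assuming only dict nesting; lists would need an extra case):

```python
from collections import Counter, deque

# Non-recursive traversal: an explicit deque replaces the call stack,
# yielding (path_tuple, leaf_value) pairs for arbitrarily deep nesting.
def iter_leaf_paths(d):
    queue = deque([((), d)])
    while queue:
        path, node = queue.popleft()
        if isinstance(node, dict):
            for key, value in node.items():
                queue.append((path + (key,), value))
        else:
            yield path, node

nested = {"a": {"b": 1, "c": {"d": 2}}, "e": 3}
counts = Counter(path for path, _ in iter_leaf_paths(nested))
# counts keys: ("a", "b"), ("a", "c", "d"), ("e",)
```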
Python collections.UserDict implements the methods necessary to proxy the dict Mapping/MutableMapping interface to self.data. For dicts with many keys, it would probably be faster to hook methods that mutate the UserDict.data dict like __setitem__, get, setdefault, update() and maybe __init__() in order to track which keys have changed instead of copying keys() into a set to do an unordered difference with a list.
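A minimal sketch of that UserDict-style hooking approach (method and attribute names are mine, chosen for illustration):

```python
from collections import UserDict

class TrackedDict(UserDict):
    """Dict that records which keys were read, so unused keys can be reported."""

    def __init__(self, *args, **kwargs):
        self.accessed = set()          # must exist before UserDict populates data
        super().__init__(*args, **kwargs)

    def __getitem__(self, key):
        self.accessed.add(key)
        return super().__getitem__(key)

    def get(self, key, default=None):
        self.accessed.add(key)
        return super().get(key, default)

    def unused_keys(self):
        return set(self.data) - self.accessed

d = TrackedDict({"a": 1, "b": 2, "c": 3})
_ = d["a"]
_ = d.get("b")
# d.unused_keys() -> {"c"}
```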
React requires setState() for all mutations of this.state because there's no way to hook dunder methods in JS: setState() updates this.state and then notifies listeners, or calls a list of functions to run when anything in this.state (or a value associated with certain keys or nested keys in this.state) changes.
FWIU ipyflow exposes the subscriber refcount/reflist but RxPy specifically does not: ipyflow/core/test/test_refcount.py: https://github.com/ipyflow/ipyflow/blob/master/core/test/tes...
Anyways,
For test assertions, unittest.mock MagicMock can track call_count and call_args_list on methods that mutate a dict like __getitem__ and get(). There's also mock_calls, which keeps an ordered list of the args passed: https://docs.python.org/3/library/unittest.mock.html
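A small sketch of the MagicMock approach, wrapping a bound method rather than patching dict dunders (CPython forbids patching attributes on builtin types):

```python
from unittest.mock import MagicMock

# wraps= delegates to the real d.get while recording every call,
# so call_count / call_args_list / mock_calls are all populated.
d = {"a": 1, "b": 2}
tracked_get = MagicMock(wraps=d.get)

tracked_get("a")
tracked_get("missing", 0)

assert tracked_get.call_count == 2
assert tracked_get.call_args_list[0].args == ("a",)
assert tracked_get.mock_calls[1].args == ("missing", 0)
```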
Reinstating memories' temporal context causes Sisyphus-like memory rejuvenation
ScholarlyArticle: "Reinstating memories’ temporal context at encoding causes Sisyphus-like memory rejuvenation" (2025) https://www.pnas.org/doi/full/10.1073/pnas.2505120122
K^4: Online Log Anomaly Detection via Unsupervised Typicality Learning
> Abstract: [...] (K^4) transforms arbitrary log embeddings into compact four-dimensional descriptors (Precision, Recall, Density, Coverage) using efficient k-nearest neighbor (k-NN) statistics. These descriptors enable lightweight detectors to accurately score anomalies without retraining. Using a more realistic online evaluation protocol, K^4 sets a new state-of-the-art (AUROC: 0.995-0.999), outperforming baselines by large margins while being orders of magnitude faster, with training under 4 seconds and inference as low as 4 µs.
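As a rough illustration of the underlying idea only (not the paper's actual K^4 descriptors), a k-NN typicality score can be as simple as the distance to the k-th nearest training embedding:

```python
import math

# Toy k-NN anomaly score: the distance from a query embedding to its
# k-th nearest neighbor among training embeddings. Larger score means
# less typical, hence more anomalous. Brute force; real systems index.
def knn_anomaly_score(train, x, k=3):
    dists = sorted(math.dist(x, t) for t in train)
    return dists[k - 1]

train = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
assert knn_anomaly_score(train, (0.05, 0.05)) < knn_anomaly_score(train, (5.0, 5.0))
```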
Tesla founder is disappointed canceled $25K EV and made 'dumpster-looking' truck
I like this quote better:
You can put the prototype of something on the road and kill people sometimes because it doesn’t work correctly, and that’s kind of OK? It’s not for me.
What happened to the existential risks of AI as a default talking point? When did that tune change?
One should not use a social media account listed as a corporate disclosure account for such purposes (as, for example, indicating a significant change in ownership status without consulting with the board, or to defame parties irrelevant to one's business).
The full article title is "Tesla founder is disappointed Musk canceled $25,000 EV and made ‘dumpster-looking’ truck" (2025) https://electrek.co/2025/07/28/tesla-founder-disapointed-mus...
More specific feedback about the Cybertruck flop debacle:
The market didn't want an APC that's remotely controllable by a stolen phone.
The sharp edges on the bed area are a likely source of injury.
The sharp edges disqualify the cybertruck for sale in at least the UK because there are specific regulations about how rounded the body of a vehicle needs to be to spare pedestrians.
The dependency upon the sole (and largest) steel stamping machine in Texas is self-sabotaging.
What did market testing indicate about the "cyber punk" ethos? Is that your average EV purchaser in the US or worldwide?
I see only 1 serious issue with the CT as a truck: you can't access the bed from the sides. Yes, there are some annoying things like huge A-pillars or the tonneau covering the rearview mirror, but those aren't dealbreakers.

The actual dealbreaker is the risk of people vandalizing your car.
From https://insideevs.com/news/749350/tesla-cybertruck-tech-expe... :
> Tesla Cybertruck's Stainless Steel Won't Be Used In Future EVs: Nearly every piece of Cybertruck tech is heading to future Teslas—except its stainless steel body.
Which drivetrain and other components are sold to external customers these days?
Per-wheel (radial or axial flux, no rare earths) motors would probably sell for EV conversion and other applications.
A boat cooler than the boat Thunder in "Thunder in Paradise" (1993) could utilize certain components for marine applications. Humanoid robots working docks would need to resist saltwater corrosion.
What could they do to pull it out with the broader market so abandoned?
Make a throwback truck or trucks. Wheel well flares.
You're a friend to the EV conversion community (axial torque and other rebalancing methods), which is seeking to retrofit legacy trucks with batteries and torque.
Bioplastic for the superstructure and bioplastic for the replaceable, ding-resistant panels.
Injection-molded bioplastic panels built to safety spec would be a good start: https://en.wikipedia.org/wiki/Injection_moulding
Fusion induction welding as necessary.
Anode disconnect on thermal runaway by melting, swelling
Twisted SWCNTs have sufficient density in the lab to compete with LiFePO4 and sodium-ion using just carbon; it just needs a controlled process to scale up.
Are you compatible with other accessories on the market?
What do AI analyses of hypercritical videos about the product say, in summary?
Or just make an electric pickup truck like Ford or Rivian.
F-150 accessory compatibility would be smart.
From "Battery-Electric Heavy-Duty Equipment: It's Sort of Like a Cybertruck" (2019) https://news.ycombinator.com/item?id=21626591 :
> For instance, their flagship product, the Dannar 4.00, can accept over 250 attachments from CAT, John Deere, or Bobcat.
There are scoop buckets with short forks on the front.
How to put a normal brush guard on it
How to mount a crane to the bed without puncturing a test load of beach balls
Slate Auto's customizable EV mini truck / SUV is smartly modular and thus easily customizable for various applications.
The Tesla skateboard design is also modular.
Skateboard (automotive platform) https://en.wikipedia.org/wiki/Skateboard_(automotive_platfor...
Getting decent error reports in Bash when you're using 'set -e'
Setting PS4 gets decent error reports with `set -x` (and `set -x -v`; `help set`).
Here's an excerpt that shows how to set PS4 from a main() in a .env shell script for configuring devcontainer userspace:
for arg in "${@}"; do
case "$arg" in
--debug)
export __VERBOSE=1 ;
#export PS4='+${LINENO}: ' ;
#export PS4='+ #${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:+${FUNCNAME[0]}()}:$(date +%T)\n+ ' ;
#export PS4='+ ${LINENO} ${FUNCNAME[0]:+${FUNCNAME[0]}()}: ' ;
#export PS4='+ $(printf "%-4s" ${LINENO}) | '
export PS4='+ $(printf "%-4s %-24s " ${LINENO} ${FUNCNAME[0]:+${FUNCNAME[0]}} )| '
#export PS4='+ $(printf "%-4s %-${SHLVL}s %-24s" ${LINENO} " " ${FUNCNAME[0]:+${FUNCNAME[0]}} )| '
;;
--debug-color|--debug-colors)
export __VERBOSE=1 ;
# red=31
export ANSI_FG_BLACK='\e[30m'
#export MID_GRAY_256='\e[38;5;244m' # Example: a medium gray
export _CRESET='\e[0m'
export _COLOR="${ANSI_FG_BLACK}"
printf "${_COLOR}DEBUG: --debug-color: This text is ANSI gray${_CRESET}\n" >&2
export PS4='+ $(printf "${_COLOR}%-4s %-24s%s |${_CRESET} " ${LINENO} "${FUNCNAME[0]:+${FUNCNAME[0]}}" )'
;;
esac
done
This, too:

function error_handler {
echo "Error occurred on line $(caller)" >&2
awk 'NR>L-4 && NR<L+4 { printf "%-5d%3s%s\n",NR,(NR==L?">>>":""),$0 }' L=$1 $0 >&2
}
if (echo "${SHELL}" | grep "bash"); then
trap 'error_handler $LINENO' ERR
fi

(I'm sure this is lovely Bash, but for all the people who rejected Perl for its modem line noise vibe... what do ya think of this?)
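A self-contained version of that trap-on-ERR pattern can be sketched as follows (the file path is hypothetical; the handler body is the one from the comment above, with awk's L passed via -v):

```shell
cat > /tmp/err_demo.sh <<'EOF'
#!/usr/bin/env bash
set -e
error_handler() {
  echo "Error occurred on line $(caller)" >&2
  # Print the three lines before and after the failing line, marking it.
  awk -v L="$1" 'NR>L-4 && NR<L+4 { printf "%-5d%3s%s\n",NR,(NR==L?">>>":""),$0 }' "$0" >&2
}
trap 'error_handler $LINENO' ERR
echo "before the failure"
false  # deliberately fails, triggering the ERR trap
EOF
bash /tmp/err_demo.sh >/tmp/err_demo.log 2>&1 || true
cat /tmp/err_demo.log
```

The handler reads the script file itself via $0, so this only works when the script is run from a file, not in an interactive shell.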
As an aside, I actually wonder if Bash's caller() was inspired by Perl's.
There is also Carp and friends, plus Data::Dumper when you not only need the stack trace but also the state of objects and data structures. Which is something that I don't think Bash can really do at all.
There are no objects in bash. There are indexed and associative arrays and both can be iterated over like so:
for value in "${SOMEARRAY[@]}"; do
echo "${value}"
done
or with the help of the keys:
for key in "${!SOMEARRAY[@]}"; do
echo "key: ${key} - value: ${SOMEARRAY["${key}"]}"
done
If you want to dump the data of any variable you can just use declare -p:
declare -p SOMEARRAY
and you get something like this:
declare -a SOMEARRAY=([0]="a" [1]="b" [2]="c" [3]="d" [4]="e" [5]="f")
What you can do, if you have a set of variables and you want them to be "dumped", is this. Let's "dump" all variables that start with "BASH":
for k in "${!BASH@}"; do
declare -p "${k}"
done
Or one could do something like this:
for k in "${!BASH@}"; do
echo "${k}: ${!k}"
done
But the declare option is much more reliable as you don't have to test for the variable's type.

Are you asking me to defend shell script syntax, or are you criticizing this excerpt from a shell script?
The awk and printf are as obscure and unreadable as Perl, but still probably faster than just starting Perl.
Ironically, in terms of portability, it's probably more likely that awk and printf are installed than Python (or Perl). This application doesn't need Python in the (devcontainer) container, and nobody does sysadmin scripts with lua (which can't `export VARNAME` for outer shells) so shell scripting is justified though indeed arcane.
Getopt is hardly more understandable than a few loops through $@ with case statements.
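For comparison, a minimal getopts sketch (the flag names here are hypothetical), which ends up looking much like the $@/case loop anyway:

```shell
parse_flags() {
  DEBUG=0; VERBOSE=0
  local opt OPTIND=1
  # getopts handles combined short flags (-dv) and tracks position via OPTIND.
  while getopts "dv" opt; do
    case "$opt" in
      d) DEBUG=1 ;;
      v) VERBOSE=1 ;;
    esac
  done
}
parse_flags -d -v
echo "DEBUG=$DEBUG VERBOSE=$VERBOSE"
```

getopts only covers short options; long options like --debug still need a manual loop or external getopt(1).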
I don't understand the relevance of other tools to "getting decent error reports in Bash"?
There are great logging (and TAP testing) libraries in Python, but that doesn't solve for debugging Bash?
There is at least one debugger for Bash scripts.
bashdb is a debugger for shell scripts: https://github.com/Trepan-Debuggers/bashdb
vscode-bash-debug is a frontend for bashdb: https://github.com/rogalmic/vscode-bash-debug
Python audio processing with pedalboard
BespokeSynth is also built on JUCE.
BespokeSynth supports VST3, AudioUnit, LV2,: https://github.com/BespokeSynth/BespokeSynth/issues/1614
One day, I found a number of open source patch bay implementations. They may be useful for building a GUI with pedalboard:
- https://github.com/Houston4444/HoustonPatchbay :
> [HoustonPatchBay is] a patchbay for JACK used by RaySession and Patchance, usable by other python Qt5 softwares.
- RaySession: https://github.com/Houston4444/RaySession is a patchbay for JACK
- Patchance: https://github.com/Houston4444/Patchance is JACK patchbay gui w/ ALSA MIDI support
> It is a direct alternative to Catia or Patchage
- org.pipewire.helvum: https://gitlab.freedesktop.org/pipewire/helvum https://flathub.org/apps/org.pipewire.Helvum :
> Helvum is a GTK-based patchbay for pipewire, inspired by the JACK tool catia.
- easyeffects: https://github.com/wwmm/easyeffects ; pipewire + GStreamer -> just pipewire
awesome-node-based-uis > Audio: https://github.com/xyflow/awesome-node-based-uis#audio
Show HN: Convert from MIDI file to ASCII tablature (and more)
Hi folks,
About seven months ago, via HN, I got nerdsniped into a silly guitar transcription problem and made a bunch of really senseless code, but what came out of it was something I thought could be pretty useful: a guitar fretboard mapper and a fingering-scoring algorithm.
So as of yesterday morning I've finally put those bits of code to "good" use, creating gtrsnipe to convert between MIDI files (.mid) and ASCII tab (as well as VexTab and ABC notation) and any combination/direction among the set of formats.
gtrsnipe tries to intelligently find the best neck and fingering positions using a note-to-fretboard mapper and a scoring algorithm that is unavoidably shaped by my subjective opinions and skills as a player, but it does its best to avoid objectively impossible fingerings.
See the example tabs and usage in the README and please, try your own transcriptions from MIDI and if you love or hate the arrangement it gives you, I'd love to hear about it so I can further refine the scoring algorithm.
Thanks!
I looked for similar tools;
Looks like tayuya is also written in Python, on mido and music21. It has a "get all notes to play" feature, mentions LilyPond tab output as a todo, and has a get_key(midi) method built on music21: https://github.com/vipul-sharma20/tayuya#get-all-notes-to-pl...
tayuya.tabs:note_nearest_to_fret: https://github.com/vipul-sharma20/tayuya/blob/master/tayuya/...
Kord has a fretboard visualizer tool: https://github.com/synestematic/kord#fretboard-tool
Textual is another way to create CLIs for Python scripts.
What about tab playback and CLI-based scrubbing?
There was a post a week or so ago about an LWN article about spotify/pedalboard, which is written in Python and built on JUCE (C++) and supports VST3 and LV2 plugins like a MIDI player or a wavetable synth and a Guitarix effects rack: https://news.ycombinator.com/item?id=44604024#44648290
I designed my own fast game streaming video codec – PyroWave
I love this. The widely used standards for video compression are focused on compression efficiency, which is important if you’re netflix or youtube, but sometimes latency and low complexity is more important. Even if only to play around and learn how a video codec actually works.
> The widely used standards for video compression are focused on compression efficiency, which is important if you’re netflix or youtube, but sometimes latency and low complexity is more important.
That's a misconception. All modern video codecs (i.e. H.264/AVC, H.265/HEVC, AV1) have explicit, first-class tools, profiles, and reference modes aimed at both low- and high-resolution low‑latency and/or low‑complexity use.
AV1: Improving RTC Video Quality at Scale: https://atscaleconference.com/av1-improving-rtc-video-qualit...
There are hardware AV1 encoders and decoders.
Objective metrics and tools for video encoding and source signal quality: netflix/vmaf, easyVmaf, psy-ex/metrics, ffmpeg-quality-metrics.
Ffmpeg settings for low-latency encoding:
# h264, h265
-preset ultrafast
-tune zerolatency
# AV1
-c:v libsvtav1
-preset 8
-svtav1-params tune=0:latency-mode=1
-g 60
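Assembled into full command lines (a sketch; input/output filenames are placeholders, and exact flag support depends on the ffmpeg build):

```shell
# H.264 (or H.265 with -c:v libx265) low-latency encode:
ffmpeg -i input.mp4 -c:v libx264 -preset ultrafast -tune zerolatency out_h264.mp4

# AV1 low-latency encode via SVT-AV1:
ffmpeg -i input.mp4 -c:v libsvtav1 -preset 8 \
  -svtav1-params tune=0:latency-mode=1 -g 60 out_av1.mkv
```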
It's possible to follow along with ffmpeg encoding for visual inspection, without waiting for the whole job to complete, using the tee muxer and ffplay.

GPU Screen Recorder and the Sunshine server expose some encoder options in GUI forms, but parameter optimization is still manual; nothing does easyVmaf with thumbnails of each rendering parameter set with IDK auto-identification of encoding artifacts.
Ardour has a "Loudness Analyzer & Normalizer" with profiles for specific streaming services.
What are good target bitrates for low-latency livestreaming 4k with h264, h265 (HDR), and AV1?
FFmpeg Explorer is made with ffmpeg.wasm: https://github.com/antiboredom/ffmpeg-explorer .. web: https://ffmpeg.lav.io/
Physicists disagree on what quantum mechanics says about reality
Last month was the 100 year anniversary of the Heisenberg uncertainty principle. From https://x.com/NobelPrize/status/1950245213194137724 :
> On this day 100 years ago physics laureate Werner Heisenberg submitted a paper that revolutionised quantum mechanics.
> Heisenberg was only 23 years old when he submitted the paper "Quantum mechanical reinterpretation of kinematic and mechanical relations" (1925)
Umdeutung paper: https://en.wikipedia.org/wiki/Umdeutung_paper
Uncertainty principle: https://en.wikipedia.org/wiki/Uncertainty_principle
> The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.
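In symbols, the limit quoted above is usually written as the Kennard bound (standard notation, not from the Wikipedia excerpt):

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

where \sigma_x and \sigma_p are the standard deviations of position and momentum, and \hbar is the reduced Planck constant.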
Ask HN: Should they call it the 'America party'?
They can't register a new party because the current administration has failed to nominate a sufficient number of FEC commissioners.
America, named after the Italian explorer Amerigo Vespucci, is composed of North America and South America.
Writing memory efficient C structs
Is it possible to apply these optimizations to Arrow?
Arrow's struct is called StructArray. Fields of StructArray have a StructType.
StructType has .bit_width and .byte_width attrs in Python and probably the other implementations too: https://arrow.apache.org/docs/python/generated/pyarrow.Struc...
Arrow supports bitfields with BooleanArray, and enums with categoricals but
"BUG: Categorical columns using the PyArrow backend requires 4x more memory" (open) https://github.com/pandas-dev/pandas/issues/58062 :
> On disk Parquet appears to store the category data as logical type String which is compressed with snappy and encoded
Arrow Flight RPC handles nested structs with enums over the wire somehow too FWIU
ACM Transitions to Full Open Access
I've greatly appreciated the ACM's movements toward open access, but I have to ask:
What's the license?
The Berlin Declaration that defined Open Access https://openaccess.mpg.de/Berlin-Declaration defines it as follows:
> 1. Open access contributions must satisfy two conditions: The author(s) and right holder(s) of such contributions grant(s) to all users a free, irrevocable, worldwide, right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose, subject to proper attribution of authorship (community standards, will continue to provide the mechanism for enforcement of proper attribution and responsible use of the published work, as they do now), as well as the right to make small numbers of printed copies for their personal use.
> 2. A complete version of the work and all supplemental materials, including a copy of the permission as stated above, in an appropriate standard electronic format is deposited (and thus published) in at least one online repository using suitable technical standards (such as the Open Archive definitions) that is supported and maintained by an academic institution, scholarly society, government agency, or other well-established organization that seeks to enable open access, unrestricted distribution, inter operability [sic], and long-term archiving.
This page is all about #2. What's #1?
I'm delighted to be able to read and share the classic CACM articles that have shaped the history of informatics, thanks to the ACM's policy changes over the last few years. The other day, for example, I was reading Liskov's paper on CLU in which she introduces the abstract data type: https://dl.acm.org/doi/10.1145/800233.807045
But, as far as I can tell, neither that web page nor the PDF linked from it has a license granting "a free, irrevocable, worldwide, right of access to, and a license to copy, use, distribute, transmit and display the work publicly and to make and distribute derivative works, in any digital medium for any responsible purpose." So, if I post it on my personal web site, or upload it to WikiSource or the Internet Archive, I'm still at risk of copyright lawsuits. And until I can do that, I only have access to the paper as long as CloudFlare thinks I'm human.
That's the problem Open Access is designed to solve.
New articles are Creative Commons (CC-BY or CC-BY-NC-ND).
CC BY 4.0: Attribution 4.0 International: https://creativecommons.org/licenses/by/4.0/
CC BY-NC-ND 4.0: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International: https://creativecommons.org/licenses/by-nc-nd/4.0/
The new articles aren't important.
The ACM is probably never again going to publish a paper as influential as Liskov's paper I mentioned above, or Knuth's "Structured Programming With go to Statements", or "Go To Statement Considered Harmful" https://dl.acm.org/doi/pdf/10.1145/362929.362947, or Schorre's "META-II: A Syntax-Oriented Compiler Writing Language" https://dl.acm.org/doi/pdf/10.1145/800257.808896, or Ken Thompson's "Regular Expression Search Algorithm" https://dl.acm.org/doi/pdf/10.1145/363347.363387, or Dan Ingalls on "The Smalltalk-76 programming system design and implementation" https://dl.acm.org/doi/10.1145/512760.512762.
Papers like those are the ones that we need to protect our ability to archive and distribute. Not David Geerts's "The Transformative Power of Inspiration" from the current issue of CACM https://cacm.acm.org/careers/the-transformative-power-of-ins.... (I am not making this up.) Thompson was competing with, let's say, Mooers and Schorre; Geerts has decided instead to compete with Jesus, the Buddha, and Norman Vincent Peale, and my brief reading of the article does not offer much hope for his prospects.
It seems safe to say that in 30 or 100 years' time nobody will cite Geerts's article as a turning point in the human understanding of inspiration, so if it's lost due to copyright restrictions, it probably won't matter that much.
At the other extreme, scholars seeking to understand the historical origins of object-orientation or personal computers would be crippled without access to material like Ingalls's paper. I'm not speculating—I'm speaking from experience, because lacking that access, I grew up thinking C++ was object-oriented!
But what do we see on the current version of the Ingalls paper that the ACM's web server just gave me? A note added in 02002 prohibiting public archival and redistribution:
> Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.
> probably never again going to publish
Does this mean that ScholarlyArticles that authors choose to publish with ACM can be uploaded to e.g. ArXiv in full instead of only the preprints?
(If you upload PostScript and PDF to ArXiv, they can generate an HTML5 rendering of the article.)
Open access > Effects on scholarly publishing: https://en.wikipedia.org/wiki/Open_access
I learned OO from lots of great resources, and may have been disadvantaged to have never read Ingalls's paper, which isn't yet cited in Wikipedia's OO page under History.
Object-oriented programming > History: https://en.wikipedia.org/wiki/Object-oriented_programming#Hi...
"'Considered harmful' considered harmful"
Considered harmful: https://en.wikipedia.org/wiki/Considered_harmful
Edsger Dijkstra published "Go To Statement Considered Harmful" (1968) in CACM.
Anyone can upload a CC-BY article in full to anywhere, and anyone can upload a CC-BY-NC-ND article to anywhere noncommercial. ArXiv only accepts uploads from authors, though.
The "history" section of the Wikipedia article cites Kay's excellent "Early History of Smalltalk" https://dl.acm.org/doi/pdf/10.1145/155360.155364 which of course does cite Ingalls's 01978 POPL paper, as well as 17 other papers published by the ACM, by my count, more than any other single publisher except Xerox. That section also highlights the ACM conference OOPSLA and cites Borning's "Thinglab", published at OOPSLA. So access to historical ACM papers is extremely important for understanding the history of object-orientation.
EPA wants to eliminate regulation for greenhouse gases
How does anyone still support this pile of turd?!?
You act like climate change is widely accepted in America. Ten years ago we had the “snowball in the Senate”.
Did they swap ours with a banana republic idiot that thinks more money now is more important than long term environmental sustainability? Where is the birth certificate!?
I guess we all have to choke down poisonous mass air pollution now due to this policy.
I think people will remember having been able to see the mountains in Mountain View for awhile during COVID.
UCS: Union of Concerned Scientists has cited studies which show over 97% consensus on climate change.
Intergovernmental Panel on Climate Change > Assessment reports: https://en.wikipedia.org/wiki/Intergovernmental_Panel_on_Cli...
Scientific consensus on climate change: https://en.wikipedia.org/wiki/Scientific_consensus_on_climat...
I'm not sure what use of force in self defense is justified in response to regression to mass poisoning by industry and government policy - in castle doctrine stand your ground states at least - in regards to greenhouse gases.
A complex system designed from scratch never works and cannot be made to work
Systemantics: https://en.wikipedia.org/wiki/Systemantics :
> The term systemantics is a commentary on prior work by Alfred Korzybski called general semantics which conjectured that all systems failures could be attributed to a single root cause – a failure to communicate
Which character says "what we have here is a failure to communicate?" in the film 'Cool Hand Luke'?
It looks like the full OT quote has another sentence:
> A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.
Is this why we are encouraged to write a test, ensure that it doesn't pass yet by running it, write code to make the test pass, ensure that there are sufficient tests and that they all pass, and only then commit to a pull request branch, so that the centralized build runner will verify that all of the tests pass before code is merged onto the release branch?
More honey bees dying, even as antibiotic use halves
From "Scientists identify culprit behind biggest-ever U.S. honey bee die-off" https://news.ycombinator.com/item?id=44434497 :
> "Viruses and vectors tied to honey bee colony losses" (2025) https://www.biorxiv.org/content/10.1101/2025.05.28.656706v1....
ScholarlyArticle: "Impacts of antibiotic use, air pollution and climate on managed honeybees in Canada" (2025) https://www.nature.com/articles/s41893-025-01603-y :
> Abstract: [...] Notably, this decrease was inversely associated with rising overwintering mortality rates, suggesting that withdrawal of antibiotics in the absence of effective alternatives may negatively impact colony health. Furthermore, multivariate analysis accounting for environmental confounders (based on 119,244 data points collected from 234 unique locations across Canada) identified nitrogen dioxide (NO2), a common air pollutant from diesel exhaust, as a strong predictor of mortality. This finding warrants urgent attention given that NO2 can degrade floral odours, rendering them undetectable to honeybees during foraging flights.
Programming of refractive functions
"Programming of refractive functions" (2025) https://www.nature.com/articles/s41467-025-62230-x :
> Abstract: [...] In addition to monochrome RFG designs, we also report wavelength-multiplexed refractive functions, where a distinct refractive function is implemented at each wavelength through the same engineered material volume, i.e., the permutation of light refraction is switched from one desired function to another function by changing the illumination wavelength. As experimental proofs of concept, we demonstrate permutation and negative refractive functions at the terahertz part of the spectrum using 3D-printed materials. Arbitrary programming of refractive functions enables new design capabilities for optical materials, devices and systems.
Irrelevant facts about cats added to math problems increase LLM errors by 300%
Basic equations of semiconductor device physics [pdf]
"The 5 basic equations of semiconductor device physics" (2008) https://web.stanford.edu/~kimth/www-mit/6.012/TheFiveEquatio...
Notes re: "Brandon's circuit simulator", which doesn't claim to model vortices in superconductors or the Quantum Anomalous Hall Effect, for example; https://news.ycombinator.com/item?id=43942279#43948096 :
> Which other simulators show electron charge density and heat dissipation?
How are these five equations sufficient or insufficient to model electrons in semiconductors?
Terminal app can now run full graphical Linux apps in the latest Android Canary
Does `ls -Z` work in Android Terminal?
(SELinux has run in enforcing mode on Android devices since Android 4.4, which was released in 2013. But Android in ChromeOS only runs SELinux in the guest VM FWIU)
It's a virtual machine, so its SELinux support should be separate from what the host is doing
SELinux on a host should restrict KVM (and X/Wayland, and the sound server).
SELinux in a guest [VM or container] should restrict processes in the guest from interfering with other processes and resources in the guest.
IMHO, Nested UIDs like uid1.subuid1.subuid2 would be better for rootless containers than root-writeable /etc/subuids.
Hierarchical Reasoning Model
I advise scepticism.
This work does have some very interesting ideas, specifically avoiding the costs of backpropagation through time.
However, it does not appear to have been peer reviewed.
The results section is odd. It does not include details of how they performed the assessments, and the only numerical values are in the figure on the front page. The results for ARC2 are (contrary to that figure) not top of the leaderboard (currently 19% compared to HRM's 5%: https://www.kaggle.com/competitions/arc-prize-2025/leaderboa...)
Skepticism is an understatement. There are tons of issues with this paper. Why are they comparing results of their expert model that was trained from scratch on a single task to general purpose reasoning models? It is well established in the literature that you can still beat general purpose LLMs in narrow domain tasks with specially trained, small models. The only comparison that would have made sense is one to vanilla transformers using the same nr of parameters and trained on the same input-output dataset. But the paper shows no such comparison. In fact, I would be surprised if it was significantly better, because such architecture improvements are usually very modest or not applicable in general. And insinuating that this is some significant development to improve general purpose AI by throwing in ARC is just straight up dishonest. I could probably cook up a neural net in pytorch in a few minutes that beats a hand-crafted single task that o3 can't solve in an hour. That doesn't mean that I made any progress towards AGI.
Have you spent much time with the ARC-1 challenge? Their results on that are extremely compelling, showing results close to the initial competition's SOTA (as of closing anyway) with a tiny model and no hacks like data augmentation, pretraining, etc that all of the winning approaches leaned on heavily.
Your criticism makes sense for the maze solving and sudoku sets, of course, but I think it kinda misses the point (there are traditional algos that solve those just fine - it's more about the ability of neural nets to figure them out during training, and known issues with existing recurrent architectures).
Assuming this isn't fake news lol.
Looking at the code, there is a lot of data augmentation going on there. For the Sudoku and ARC data sets, they augment every example by a factor of 1,000.
https://github.com/sapientinc/HRM/blob/main/dataset/build_ar...
That's fair, they are relabelling colours and rotating the boards. I meant more like mass generation of novel puzzles to try and train specific patterns. But you are right that technically there is some augmentation going on here, my bad.
> That's fair, they are relabelling colours and rotating the boards.
Photometric augmentation, Geometric augmentation
> I meant more like mass generation of novel puzzles to try and train specific patterns.
What is the difference between Synthetic Data Generation and Self Play (like AlphaZero)? Don't self play simulations generate synthetic training data as compared to real observations?
I don't know the jargon, but for me the main thing is the distinction between humans injecting additional bits of information into the training set vs the algorithm itself discovering those bits of information. So self-play is very interesting (it's automated as part of the algorithm) but stuff like generating tons of novel sudoku puzzles and adding them to the training set is less interesting (the information is being fed into the training set "out-of-band", so to speak).
In this case I was wrong, the authors are clearly adding bits of information themselves by augmenting the dataset with symmetries (I propose "symmetry augmentation" as a much more sensible phrase for this =P). Since symmetries share a lot of mutual information with each other, I don't think this is nearly as much of a crutch as adding novel data points into the mix before training, but ideally no augmentation would be needed.
I guess you could argue that in some sense it's fair play - when humans are told the rules of sudoku the symmetry is implicit, but here the AI is only really "aware" of the gradient.
Symmetry augmentation sounds good for software.
Traditional ML CV Computer Vision research has perhaps been supplanted by multimodal LLMs that are trained on image analysis annotations. (CLIP, Brownian-motion based Dall-E and Latent Diffusion were published in 2021. More recent research: Brownian Bridges, SDEs, Lévy processes. What are foundational papers in video genai?)
TOPS are now necessary.
I suspect that existing CV algos for feature extraction would also be useful for training LLMs. OpenCV, for example, has open algorithms like ORB (Oriented FAST and Rotated BRIEF), KAZE and AKAZE, and SIFT since 2020. SIFT "is highly robust to rotation, scale, and illumination changes".
But do existing CV feature extraction and transform algos produce useful training data for LLMs as-is?
Similarly, pairing code and tests with a feature transform at training time probably yields better solutions to SWE-bench.
Self Play algos are given rules of the sim. Are self play simulations already used as synthetic training data for LLMS and SLMs?
There are effectively rules for generating synthetic training data.
The orbits of the planets might be a good example of where synthetic training data is limited and perhaps we should rely upon real observations at different scales given cost of experimentation and confirmations of scale invariance.
Extrapolations from orbital observations and classical mechanics failed to predict the Perihelion precession of Mercury (the first confirmation of GR General Relativity).
Generating synthetic training data from orbital observations in which Mercury's 43-arcsecond deviation from Newtonian mechanics was disregarded as an outlier would result in a model overweighted by existing biases in real observations.
Tests of general relativity > Perihelion precession of Mercury https://en.wikipedia.org/wiki/Tests_of_general_relativity#Pe...
Okay, haha, I'm not sure what we're doing here.
Stealth Genetic Switch in Mosquitoes Halts Malaria Spread
ScholarlyArticle: "Driving a protective allele of the mosquito FREP1 gene to combat malaria" (2025) https://www.nature.com/articles/s41586-025-09283-6
Is there a similar drive for West Nile Virus?
"Treatment and Prevention of West Nile Virus Disease" https://www.cdc.gov/west-nile-virus/hcp/treatment-prevention...
/? West Nile Virus WNV: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
New AI architecture delivers 100x faster reasoning with just 1,000 examples
sapientinc/HRM: Hierarchical Reasoning Model: https://github.com/sapientinc/HRM :
> [...] Inspired by the hierarchical and multi-timescale processing in the human brain, we propose the Hierarchical Reasoning Model (HRM), a novel recurrent architecture that attains significant computational depth while maintaining both training stability and efficiency. HRM executes sequential reasoning tasks in a single forward pass without explicit supervision of the intermediate process, through two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations. With only 27 million parameters, HRM achieves exceptional performance on complex reasoning tasks using only 1000 training samples. The model operates without pre-training or CoT data, yet achieves nearly perfect performance on challenging tasks including complex Sudoku puzzles and optimal path finding in large mazes. Furthermore, HRM outperforms much larger models with significantly longer context windows on the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities. These results underscore HRM’s potential as a transformative advancement toward universal computation and general-purpose reasoning systems.
"Hierarchical Reasoning Model" (2025) https://arxiv.org/abs/2506.21734 .. https://news.ycombinator.com/item?id=44699452 (9 hrs ago)
"ARC-AGI-2: A New Challenge for Frontier AI Reasoning Systems" (2025) https://arxiv.org/abs/2505.11831
ARC Prize: $1m: https://arcprize.org/ :
> At ARC Prize, our mission is to serve as a North Star towards AGI through enduring benchmarks, directing efforts towards systems capable of general intelligence and significantly compressing the timeline for scientific breakthroughs
ARC-AGI-3: Interactive Reasoning Benchmark (2026) https://arcprize.org/arc-agi/3/ :
> The first eval that measures human-like intelligence in AI. [w/ HuggingFace]
Smallest particulate matter air quality sensor for ultra-compact IoT devices
Waiting for Achim Haug of AirGradient! What’s your thoughts on this?
Hahah. Thanks for calling me!
We actually have a sample of the Bosch in our office but haven’t come along to test it yet. Maybe with this call, I will get our team onto it.
The form factor has pros and cons in my opinion. The size and lower energy consumption definitely opens new applications but the problem is that it needs a clear field of view to do the measurements.
This could in turn restrict the applicability, eg as a wearable sensor.
In general I think it’s great to see innovations in the PM sensor field but often minimizations go on costs of accuracy.
We saw that for example with the Sensirion photo acoustic CO2 SCD4x sensor that is tiny but needs more black box algorithms to compensate for certain environmental conditions that then limits the range of applications.
Neuroscience study shows the brain emits light through the skull
"Exploring ultraweak photon emissions as optical markers of brain activity" (2025) https://www.cell.com/iscience/fulltext/S2589-0042(25)00279-2 :
> Highlights:
> [...]
> - Optical readouts correlate with evoked neuroelectric oscillations across tasks
> - Label-free photoencephalography represents a novel method for brain monitoring
Strain-induced crumpling of graphene oxide to achieve fast extraction of H2, CO2
"Strain-induced crumpling of graphene oxide lamellas to achieve fast and selective transport of H2 and CO2" (2025) https://www.nature.com/articles/s41565-025-01971-8
NewsArticle: "New approach to engineering crumpled GO membranes for separating hydrogen and other gases" (2025) https://phys.org/news/2025-07-approach-crumpled-membranes-hy...
Google Has a Long Duration Energy Storage Message for Fossil Fuels
The article is about LDES Long-Duration Energy Storage.
(I remember learning of Google's early Bloom Box investments many years ago, as an alternative to diesel generator reserves, which expire.)
Energy Dome has a CO2 -based LDES which Google has invested in.
Energy Dome's system requires compressing CO2 to supercriticality.
> [...] pumped storage hydropower still accounts for more than 90% of utility scale storage in the US, long duration or otherwise.
Additional forms of CO2 storage and capture?
- Concrete that absorbs CO2; developed with Allegro-FM AI: https://news.ycombinator.com/item?id=44677145
- Bicarbonate + CO2 + (...) => formate at 96% efficiency: https://news.ycombinator.com/item?id=38097663
/? CO2 (in my hnlog)
- Butter! "Butter made from CO2 could pave the way for food without farming" https://news.ycombinator.com/item?id=40918803
There's plenty of water for data centers
Aquifers are running dry.
Some crops require a lot of water for very little nutrient returns.
Agriculture and Drilling and Mining and Datacenters use a lot of water.
But datacenters don't need to use so much water.
A recent Microsoft datacenter design shows that datacenters can be filled with water once and then recirculate it, rather than venting steamed, sterilized, demineralized water into the atmosphere to manage waste heat: "Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
> this design will avoid the need for more than 125 million liters of water per year per datacenter
How else can datacenters reduce their excessive and inefficient new resource consumption requirements?
From "The Drying Planet" (2025-07-25) https://www.propublica.org/article/water-aquifers-groundwate... :
> [Global map of Water Loss (in mmSLE)]
> In the far north, the detected loss is due largely to glaciers melting and subarctic lakes drying.
> But farther south — where most people live — it is largely the race to suck groundwater from aquifers that is removing the water from the continents.
> So much groundwater is now being pumped that it is filling the oceans as it drains off land, becoming one of the largest drivers of global sea level rise.
[...]
> Water From Land Has Become a Leading Driver of Sea Level Rise
> Most of the water lost from drying regions is from groundwater pumping, which ultimately shifts fresh water from aquifers into the oceans.
Marine Spongiibacter Exopolysaccharide Causes Potent Anti Cancer Pyroptosis
"Ocean Sugar Makes Cancer Cells Explode" (2025) https://scitechdaily.com/ocean-sugar-makes-cancer-cells-expl... :
> In research published in The FASEB Journal, investigators purified a long-chain sugar molecule, or exopolysaccharide, from deep-sea bacteria and demonstrated that it triggers pyroptosis to inhibit tumor growth.
> The compound, called EPS3.9, consists of mannose and glucose and is produced by the Spongiibacter nanhainus CSC3.9 bacterial strain and other members of the genus Spongiibacter. Mechanistic analyses showed that EPS3.9 can directly target 5 membrane phospholipid molecules and exert tumor toxicity by stimulating pyroptosis in human leukemia cells. EPS3.9 also had significant anti-tumor effects in the mice with liver cancer and activated anti-tumor immune responses.
“A Novel Exopolysaccharide, Highly Prevalent in Marine Spongiibacter, Triggers Pyroptosis to Exhibit Potent Anticancer Effects” (2025) DOI: 10.1096/fj.202500412R https://faseb.onlinelibrary.wiley.com/doi/10.1096/fj.2025004...
Allegro-FM: An Equivariant Foundation Model for Exascale Molecular Dynamics Sims
"Concrete that lasts centuries and captures carbon? AI just made it possible" (2025) https://www.sciencedaily.com/releases/2025/07/250723045707.h...
PSA: SQLite WAL checksums fail silently and may lose data
Do the sqlite replication systems depend upon WAL checksums?
Merkle hashes would probably be better.
google/trillian adds Merkle hashes to table rows.
sqlite-parquet-vtable would work around broken WAL checksums.
sqlite-wasm-http is almost a replication system.
Re: "Migration of the [sqlite] build system to autosetup" https://news.ycombinator.com/item?id=41921992 :
> There are many extensions of SQLite; rqlite (Raft in Go,), cr-sqlite (CRDT in C), postlite (Postgres wire protocol for SQLite), electricsql (Postgres), sqledge (Postgres), and also WASM: sqlite-wasm, sqlite-wasm-http, dqlite (Raft in Rust),
> awesome-sqlite
From "Adding concurrent read/write to DuckDB with Arrow Flight" https://news.ycombinator.com/item?id=42871219 :
> cosmos/iavl is a Merkleized AVL tree. https://github.com/cosmos/iavl
/? Merkle hashes for sqlite: https://www.google.com/search?q=Merkle+hashes+for+SQlite
A git commit hash is basically a Merkle tree root, as it depends upon the previous hashes before it.
Merkle tree: https://en.wikipedia.org/wiki/Merkle_tree
(How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?
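A hedged sketch of the difference: a Merkle root over table rows makes any row change propagate to a single root hash (and lets verification localize the changed subtree), whereas WAL checksums only cover individual frames. The row encoding below is hypothetical:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash leaf hashes up to a single root."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical serialized rows; tampering with any row changes the root.
rows = [b"1|alice", b"2|bob", b"3|carol"]
root = merkle_root(rows)
assert merkle_root([b"1|alice", b"2|MALLORY", b"3|carol"]) != root
```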
Cerebras launches Qwen3-235B, achieving 1.5k tokens per second
> Qwen3-235B uses an efficient mixture-of-experts architecture that delivers exceptional compute efficiency, enabling Cerebras to offer the model at $0.60 per million input tokens and $1.20 per million output tokens—less than one-tenth the cost of comparable closed-source models.
$ 0.60/million input tokens
$ 1.20/million output tokens
How many minutes of 4K YouTube HDR video is that equivalent to in kWh of energy usage?
> Concurrent with this launch, Cerebras has quadrupled its context length support from 32K to 131K tokens—the maximum supported by Qwen3-235B.
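At the quoted rates, a rough per-request cost check (the request's token counts are hypothetical):

```python
# Quoted Cerebras rates, converted to dollars per token
in_rate, out_rate = 0.60e-6, 1.20e-6
# Hypothetical request: 100k input tokens, 20k output tokens
cost = 100_000 * in_rate + 20_000 * out_rate
print(round(cost, 4))  # → 0.084
```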
Fourier lightfield multiview stereoscope for large field-of-view 3D imaging
"Fourier lightfield multiview stereoscope for large field-of-view 3D imaging in microsurgical settings" (2025) https://www.spiedigitallibrary.org/journals/advanced-photoni... :
> Abstract: We present the Fourier lightfield multiview stereoscope (FiLM-Scope). This imaging device combines concepts from Fourier lightfield microscopy and multiview stereo imaging to capture high-resolution 3D videos over large fields of view. The FiLM-Scope optical hardware consists of a multicamera array, with 48 individual microcameras, placed behind a high-throughput primary lens. This allows the FiLM-Scope to simultaneously capture 48 unique 12.8 megapixel images of a 28 × 37 mm field-of-view, from unique angular perspectives over a 21 deg × 29 deg range, with down to 22μm lateral resolution. We also describe a self-supervised algorithm to reconstruct 3D height maps from these images. [ to 11 μm ] [ ... ]
48 x 12.8 megapixels = 614.4 megapixel lightfield microscope
Impacts of adding PV solar system to internal combustion engine vehicles
A similar question:
How large does a solar panel array have to be on a solar laser crop weeder, and how much acreage can it cover on a sunny day?
Is there potential to optimize solar beyond the perceived limits?
> Is there potential to optimize solar beyond the perceived limits?
There is.
The albedo of solar panel products varies with the coating and level of dirt.
There are waterless methods of cleaning solar panels.
Thermophotovoltaic (TPV) cells generate some energy from infrared (heat), which regular PV cells waste.
The Shockley–Queisser limit applies only to single-junction cells.
Shockley–Queisser limit > Exceeding the limit: https://en.wikipedia.org/wiki/Shockley%E2%80%93Queisser_limi...
"How to cut U.S. residential solar costs in half" (2025) https://news.ycombinator.com/item?id=44551633
What is X-Forwarded-For and when can you trust it? (2024)
From the article: https://httptoolkit.com/blog/what-is-x-forwarded-for/ :
> Dropping all external values like this is the safest approach when you're not sure how secure and reliable the rest of your call chain is going to be. If other proxies and backend apps are likely to blindly trust the incoming information, or generally make insecure choices (which we'll get into more later) then it's probably safest to completely replace the X-Forwarded-For header at that outside-world facing reverse proxy, and ditch any untrustworthy data in the process.
X-Forwarded-For: https://en.wikipedia.org/wiki/X-Forwarded-For :
> Just logging the X-Forwarded-For field is not always enough as the last proxy IP address in a chain is not contained within the X-Forwarded-For field, it is in the actual IP header. A web server should log both the request's source IP address and the X-Forwarded-For field information for completeness
HTTP header injection: https://en.wikipedia.org/wiki/HTTP_header_injection
This OWASP page has a list of X-Forwarded-For and X-FORWARDED-foR and similar headers; "Headers for IP Spoofing" https://owasp.org/www-community/pages/attacks/ip_spoofing_vi...
A sufficient WAF should detect all such attempts.
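Given that list of spoofable headers, backend code typically resolves the client IP by walking X-Forwarded-For right to left past known proxies. A minimal sketch, assuming a hypothetical trusted-proxy set:

```python
# Hypothetical trusted-proxy list for illustration only
TRUSTED_PROXIES = {"203.0.113.43", "10.0.0.1"}

def client_ip(xff_header: str, peer_ip: str) -> str:
    """Rightmost address not belonging to a trusted proxy wins.

    The peer IP (from the TCP connection) is appended as the last hop,
    since it is never present in the header itself.
    """
    hops = [h.strip() for h in xff_header.split(",")] + [peer_ip]
    for ip in reversed(hops):
        if ip not in TRUSTED_PROXIES:
            return ip
    return peer_ip  # every hop was a trusted proxy

print(client_ip("198.51.100.7, 203.0.113.43", "10.0.0.1"))
# → 198.51.100.7
```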
The X-Forwarded-For Wikipedia article mentions that RFC 7239 actually standardizes the header and parsing:
Forwarded: for=192.0.2.60;proto=http;by=203.0.113.43
Forwarded: for="[2001:db8::1234]"
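Those key=value pairs can be split with a minimal sketch; this is an illustration that assumes no separators inside quoted strings, which a real RFC 7239 parser must handle:

```python
def parse_forwarded(value: str) -> list[dict]:
    """Naive RFC 7239 Forwarded parser: comma-separated elements,
    each a semicolon-separated list of key=value pairs."""
    elements = []
    for element in value.split(","):
        pairs = {}
        for pair in element.split(";"):
            k, _, v = pair.strip().partition("=")
            pairs[k.lower()] = v.strip('"')
        elements.append(pairs)
    return elements

print(parse_forwarded("for=192.0.2.60;proto=http;by=203.0.113.43"))
# → [{'for': '192.0.2.60', 'proto': 'http', 'by': '203.0.113.43'}]
```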
RFC 7239: "Forwarded HTTP Extension" (2014):
https://www.rfc-editor.org/rfc/rfc7239
Erythritol linked to brain cell damage and stroke risk
> erythritol is a sugar alcohol
From https://news.ycombinator.com/item?id=43299867 :
>> "Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming" (2016) https://www.science.org/doi/10.1126/scitranslmed.aad6100
>> "Powdered Booze Could Fix Your Clogged Arteries" (2016) https://www.popsci.com/compound-in-powdered-alcohol-can-also...
> FWIU, beta-cyclodextrin is already FDA approved, and injection of betacyclodextrin reversed arterio/atherosclerosis; possibly because our arteries are caked with sugar alcohol and beta-cyclodextrin absorbs alcohol
How would you compare it with alpha-cyclodextrin? Are these available in good quality on Amazon?
Have you been taking beta-cyclodextrin for a while? In what dose?
I have never taken beta cyclodextrin for any indication. I thought I would relay the study and that it's already approved for human use.
FWIU, when the sugar industry maligned fat in the US in the TODO, food manufacturers replaced the fat in "reduced fat" foods with fake sugar substitutes (which each have harms), high-fructose corn syrup, or molasses.
What percentage of cardiovascular "plaque" is sugar alcohol and thus apparently treatable with β or α cyclodextrin, in controls and patients with conditions like Arteriosclerosis and Atherosclerosis?
Simulating hand-drawn motion with SVG filters
From "SVGs that feel like GIFs" https://news.ycombinator.com/item?id=44498133#44501917 :
> /? svg animation: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Engineers achieve efficient integration of quantum dot lasers on silicon chips
"Quantum Dot DBR Lasers Monolithically Integrated on Silicon Photonics by In-Pocket Heteroepitaxy" (2025) https://ieeexplore.ieee.org/document/10944565
"Scalable and Monolithic Integration of Quantum Dot Lasers for Silicon Photonics" (2025) https://ieeexplore.ieee.org/document/11081542
Qlass: VQE on glass and other photonic quantum devices
"Variational approach to photonic quantum circuits via the parameter shift rule" (2024) https://arxiv.org/abs/2410.06966
VQE: Variational Quantum Eigensolver
How to cut U.S. residential solar costs in half
> Birch points to Australia, where he said the average 7 kW solar array with a 7 kW battery costs $14,000. That equates to $2.02 per W, with batteries included.
$ 2.02 per Watt
> In the United States, that same solar and battery installation averages $36,000, said Birch. Permitting alone can take two to six months, and the cost per watt of a solar plus storage installation is up to 2.5 times the Australian price, landing at $5.18 per W.
$ 5.18 per Watt
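A quick sanity check of the quoted per-watt figures, assuming the 7 kW array wattage is the denominator (the small gap from the article's $2.02 and $5.18 suggests the quoted totals or system sizes are rounded):

```python
# Quoted installed costs and array size
au_total, us_total, watts = 14_000, 36_000, 7_000
print(round(au_total / watts, 2))  # → 2.0
print(round(us_total / watts, 2))  # → 5.14
```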
> [...] Residential solar is about to get more expensive as the 30% investment tax credit expires at the end of 2025.
ETH Zurich and EPFL to release a LLM developed on public infrastructure
Use case for science and code LLMs: Superhydrodynamic gravity (SQR / SQG)
LLMs do seem to favor general relativity but probably would've favored classical mechanics at the time given the training corpora.
Not-yet unified: Quantum gravity, QFT, "A unified model must: " https://news.ycombinator.com/item?id=44289148
Will be interested to see how this model responds to currently unresolvable issues in physics. Is it an open or a closed world mentality and/or a conditioned disclaimer which encourages progress?
What are the current benchmarks?
From https://news.ycombinator.com/item?id=42899805 re: "Large Language Models for Mathematicians" (2023) :
> Benchmarks for math and physics LLMs: FrontierMath, TheoremQA, Multi SWE-bench: https://news.ycombinator.com/item?id=42097683
Multi-SWE-bench: A Multi-Lingual and Multi-Modal GitHub Issue Resolving Benchmark: https://multi-swe-bench.github.io/
Add'l LLM benchmarks and awesome lists: https://news.ycombinator.com/item?id=44485226
Microsoft has a new datacenter that you don't have to keep adding water to, which spares the aquifers.
How to use this LLM to solve energy and sustainability problems all LLMs exacerbate? Solutions for the Global Goals, hopefully
(Unbelievable that I need to justify this at -4!)
Is the performance or accuracy on this better on FrontierMath or Multi-SWE-bench, given the training in 1,000 languages?
I just read in the Colab release notes that models uploaded to HuggingFace can be opened on Colab with "Open in colab" on HuggingFace
It's the word "gravity" that triggers them.
First-principles diagrammatic Monte Carlo for electron–phonon and polaron
Is there a geometric analogue to this, too; like the amplituhedron?
/? amplituhedron
From "Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0... :
> Amplituhedrons are also "quantum-geometric" "manifolds" (?); and so no Feynman diagrams.
> [...]
> "Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) https://doi.org/10.21203/rs.3.rs-1572967/v1
From https://news.ycombinator.com/item?id=30782678 :
> - [ ] Maybe Twistor theory has insight into a classical geometrical formulation that could be run on a non-QC?
> Amplituhedron: https://en.wikipedia.org/wiki/Amplituhedron
Imaging of electrons and electron behavior: https://news.ycombinator.com/item?id=43186410 :
> "Optical widefield nuclear magnetic resonance microscopy" (2025) https://www.nature.com/articles/s41467-024-55003-5 ; phase and intensity for each pixel/voxel
/? electrons
- https://news.ycombinator.com/item?id=41765192
- https://news.ycombinator.com/item?id=42082690 ; electron hydrodynamics in graphene at room temperature
- https://news.ycombinator.com/item?id=41775612 ; Re: sonoluminesence, Earthquake light, piezoelectricity and quartz and gold and surface plasmon polaritons
Postgres LISTEN/NOTIFY does not scale
Re: Postgres LISTEN/NOTIFY and PgQueuer, which is built on LISTEN/NOTIFY: https://news.ycombinator.com/item?id=41284703#41285614
Bash with Debugger and Improved Debug Support and Error Handling
There are libraries for unit testing shell scripts.
And also,
From https://news.ycombinator.com/item?id=26906351 :
> From "Bash Error Handling" https://news.ycombinator.com/item?id=24745833 : you can display the line number in `set -x` output by setting $PS4:
export PS4='+(${BASH_SOURCE}:${LINENO}) '
set -x
And trap '(read -p "[$BASH_SOURCE:$LINENO] $BASH_COMMAND?")' DEBUG
From MIT, an instruction manual for turning research into startups
"The R&D Venture Studio Playbook" https://rdventurestudio.com/
SVGs that feel like GIFs
Tools for making animated SVGs from terminal recordings:
asciinema2svg: https://github.com/thenets/asciinema2svg
termsvg: https://github.com/MrMarble/termsvg
/? terminal svg: https://hn.algolia.com/?q=terminal+svg
/? svg animation: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Backlog.md – Markdown‑native Task Manager and Kanban visualizer for any Git repo
I threw Claude Code at an existing codebase a few months back and quickly quit; untangling its output was slower than writing from scratch. The fix turned out to be process, not model horsepower.
Iteration timeline
==================
• 50 % task success - added README.md + CLAUDE.md so the model knew the project.
• 75 % - wrote one markdown file per task; Codex plans, Claude codes.
• 95 %+ - built Backlog.md, a CLI that turns a high-level spec into those task files automatically (yes, using Claude/Codex to build the tool).
Three-step loop that works for me:
1. Generate tasks - Codex / Claude Opus → self-review.
2. Generate plan - same agent, “plan” mode → tweak if needed.
3. Implement - Claude Sonnet / Codex → review & merge.
For simple features I can even run this from my phone: ChatGPT app (Codex) → GitHub app → ChatGPT app → GitHub merge.
Repo: https://github.com/MrLesk/Backlog.md
Would love feedback and happy to answer questions!
Really love this.
Would love to see an actual end to end example video of you creating, planning, and implementing a task using your preferred models and apps.
Will definitely do. I am also planning to run a benchmark with various models to see which one is more effective at building a full product starting from a PRD and using backlog for managing tasks
Is there an established benchmark for building a full product?
- SWE-bench leaderboard: https://www.swebench.com/
- Which metrics for e.g. "SWE-Lancer: a benchmark of freelance software engineering tasks from Upwork"? https://news.ycombinator.com/item?id=43101314
- MetaGPT, MGX: https://github.com/FoundationAgents/MetaGPT :
> Software Company as Multi-Agent System
> MetaGPT takes a one line requirement as input and outputs user stories / competitive analysis / requirements / data structures / APIs / documents, etc. Internally, MetaGPT includes product managers / architects / project managers / engineers. It provides the entire process of a software company along with carefully orchestrated SOPs.
- Mutation-Guided LLM-based Test Generation: https://news.ycombinator.com/item?id=42953885
- https://news.ycombinator.com/item?id=41333249 :
- codefuse-ai/Awesome-Code-LLM > Analysis of AI-Generated Code, Benchmarks: https://github.com/codefuse-ai/Awesome-Code-LLM :
> 8.2 Benchmarks: Integrated Benchmarks, Evaluation Metrics, Program Synthesis, Visually Grounded Program, Synthesis, Code Reasoning and QA, Text-to-SQL, Code Translation, Program Repair, Code Summarization, Defect/Vulnerability Detection, Code Retrieval, Type Inference, Commit Message Generation, Repo-Level Coding
- underlines/awesome-ml/tools.md > Benchmarking: https://github.com/underlines/awesome-ml/blob/master/llm-too...
- formal methods workflows, coverage-guided fuzzing: https://news.ycombinator.com/item?id=40884466
- "Large Language Models Based Fuzzing Techniques: A Survey" (2024) https://arxiv.org/abs/2402.00350
You have compiled an interesting list of benchmarks and adjacent research. The implicit question is whether an established benchmark for building a full product exists.
After reviewing all this, what is your actual conclusion, or are you asking? Is the takeaway that a comprehensive benchmark exists and we should be using it, or is the takeaway that the problem space is too multifaceted for any single benchmark to be meaningful?
The market - actual customers - is probably the best benchmark for a product.
But then outstanding liabilities due to code quality and technical debt aren't costed in by the market.
There are already code quality metrics.
SAST and DAST tools can score or fix code, as part of a LLM-driven development loop.
Formal verification is maybe the best code quality metric.
Is there more than Product-Market fit and infosec liabilities?
Scientists identify culprit behind biggest-ever U.S. honey bee die-off
How to plant a pollinator garden?
How to counter parasitic mites? Aren't there new LLM applications for chemicals discovery?
> According to a preprint posted to the bioRxiv server this month, nearly all the dead colonies tested positive for bee viruses spread by parasitic mites. Alarmingly, every single one of the mites the researchers screened was resistant to amitraz, the only viable mite-specific pesticide — or miticide — of its kind left in humans’ arsenal
"Viruses and vectors tied to honey bee colony losses" (2025) https://www.biorxiv.org/content/10.1101/2025.05.28.656706v1....
> How to counter parasitic mites? Aren't there new LLM applications for chemicals discovery?
hard to imagine that additional hubris will solve problems created by hubris
"Chemical knowledge and reasoning of large language models vs. chemist expertise" (2025) https://news.ycombinator.com/item?id=44275471
From (2025) https://www.statnews.com/pharmalot/2025/04/11/fda-animals-do... :
> [FDA] will encourage researchers to use computer modeling and artificial intelligence to predict how a drug will perform, as well as organs-on-a-chip, which are miniaturized devices that mimic organs and tissues. And to determine effectiveness, the FDA will begin using existing, real-world safety data from other countries where a drug has already been studied in humans.
Also from 2025: "FDA to Use A.I. In Drug Approvals to 'Radically Increase Efficiency'" https://news.ycombinator.com/item?id=44252183
(Edit)
From FDA > "Artificial Intelligence for Drug Development" https://www.fda.gov/about-fda/center-drug-evaluation-and-res... :
> FDA published a draft guidance in 2025 titled, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision Making for Drug and Biological Products.”
Native pollinators don't give a shit about mites. Don't spray herbicide and nature will do the rest.
What in nature eats the mites that are killing the bees?
Nothing needs to eat them; they just need to be manageable for the bee populations. The way native colonies work, it just doesn't matter. Colony size is essentially never greater than a few, and many species don't form colonies at all, so mites don't really have any good transmission vector.
Have you read the article?
Do you believe that ecology will just resolve bee colony collapse due to mites?
From the article:
> USDA research points to viruses spread by pesticide-resistant mites, indicating a worrying trend
If nothing eats or kills the mites that are killing the bees, should we expect bee colony collapse to resolve on its own?
Cell Towers Can Double as Cheap Radar Systems for Ports and Harbors (2014)
Also flood forecasting
https://www.smh.com.au/national/nsw/world-first-5g-spy-will-...
Flood sensing with 5G?
> [...] New South Wales State Emergency Service (NSW SES) and the NSW Government, University of Technology Sydney (UTS) researchers working with industry partner TPG Telecom [...]
> “We want to tell people exactly how high [the flood] is. We’re now down to accuracy of 0.1 metres.”
> [...] “Currently, residents will receive the warning that the water is going to come, and they’ve got to get their cattle to higher ground. But how high is high?” she said.
Is Running Bad for Your Knees? Research Says, "No" (2023)
Is there any distinction made between trail and road runs?
I don't know how you can claim that running on hard pavement is good for the body, with comparable cardio as a control.
I strongly doubt that this control model is correct if the recommended adjustment is to discard the association between running-surface hardness and e.g. patellar tendinitis or patellofemoral pain syndrome. Still, the guideline should be to avoid road runs, because they certainly exacerbate symptoms of such injuries and disorders.
Running on pavement, or whatever hard surface can be just fine, depending on the mechanics of the step landing.
Also I imagine that trail running might raise potential to land awkwardly because the ground is all sorts of uneven. Pavement on the other hand is pretty predictable.
They each have their charms, I suppose.
I don't think I've ever heard anyone say that they prefer running on asphalt or concrete over a nice soft spongy track.
I don't know. I've never run on a properly surfaced track. The one my high school had was essentially just loose limestone, at the time. Though that was ~25 years ago. I think it was upgraded since then. Besides, at that time, I was about capable of running half a lap. High school me was NOT athletic.
As for now, I'm kind of 50/50, or maybe 60/40, in favour of a regular old road vs trails. I much prefer a road to something like a beach. Having to compensate for the constantly giving, lumpy ground that is sand is kinda meh. I also seem to remember a study that found running on sand was actually worse for injury risk.
Landing on the ball of the foot means your leg isn't hyper extended, which means the impact is diffused since the knee and hip aren't completely straight at the moment of the strike. The whole hyper extended leg and heel-strike thing seems to be a consequence of raised-heel shoes. See my other post on this thread. Particularly the Harvard link.
Without shoes the first impact is the ball, followed by the toes, then the heel, last (or sometimes not at all). The impact on the joints is actually lower this way than heel-striking with a raised heel shoe. You can kinda just try this, even by jumping in place and landing on your heel with a shoe, vs landing on the balls of your feet.
We were fortunate to have a nice soft track for our track and field in high school.
In middle school, on an asphalt track with painted lines, I took out a hurdle and a chunk of my hip learning to run hurdles.
Sometimes people fall where they're running.
The roughness of a soft track also scrapes but it's definitely nicer than road runs IMHO.
I really doubt that we're evolutionarily tuned for running on hard surfaces.
So, is the control model for this study correct?
For how many generations have humans had which kinds of shoes?
bootc-image-builder: Build your entire OS from a Containerfile
Does bootc-image-builder build Native Containers?
Do Native Containers work as VM images that can be stored in an OCI Image/Artifact/Package Registry?
I've been mentioning Native Containers since I realized that was how bazzite works now.
Is vagrant necessary anymore if host, vm, and container images can all be signed and stored in an OCI Image store?
From https://news.ycombinator.com/item?id=44137501 re: Firecracker and Microsandbox VMs :
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/
ublue-os/image-template: https://github.com/ublue-os/image-template :
> Build your own custom Universal Blue Image
ublue-os/akmods has nvidia GPU drivers, nvidia-open, zfs: https://github.com/ublue-os/akmods :
> A caching layer for pre-built Fedora akmod RPMs
> OCI images providing a set of cached kernel RPMs and extra kernel modules to Universal Blue images. Used for better hardware support and consistent build process.
nvidia-container-toolkit (CDI) is necessary for --gpus=all to do CUDA and libEGL 3D with podman. Is this also already installed in bazzite?
ublue-os/toolboxes: "quadlets and systemd service units for management", boxkit : https://github.com/ublue-os/toolboxes#images
ublue-os/devcontainer .devcontainer/devcontainer.json: https://github.com/ublue-os/devcontainer/blob/main/src/base/...
It looks like the Just Justfile 40-nvidia.just has moved due to image topology simplification? https://news.ycombinator.com/item?id=39364975 :
> ublue-os/config//build/ublue-os-just/40-nvidia.just defines the `ujust configure-nvidia` and `ujust toggle-nvk` commands
What does "native containers" mean in this context?
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation
From https://coreos.github.io/rpm-ostree/container/#ostree-native... :
> rpm-ostree inherits work in ostree-rs-ext to create “container native ostree” functionality. This elevates OCI/docker containers to be natively supported as a transport mechanism for bootable operating systems.
I think it means simplification of complexity and unnecessary re-duplication.
Personal care products disrupt the human oxidation field
Not well versed in the field, what are the basic implications of this for health?
In the 1970s there was a lot of talk about ‘healthful negative ions’ and a fad for negative ion generators even though many of those also generated hazardous ozone.
Hydroxyl ions are a significant kind of negative ion in the atmosphere, and they're known to be good because they react with and clean out pollutants like methane.
https://en.wikipedia.org/wiki/Hydroxyl_radical
https://earthobservatory.nasa.gov/images/144358/detergent-li...
FWIU, hydrogen plasma in water for hydrolysis would produce OH hydroxyl radicals (and H2O2, O3 (ozone), and NO_x).
TIL that hydroxyl radicals react with methane and thereby clean the air?
Air ioniser: https://en.wikipedia.org/wiki/Air_ioniser :
> A 2018 review found that negative air ions are highly effective in removing particulate matter from air. [6]
But the Ozone. Ozone sanitizes and freshens, but is bad for the lungs at high concentrations.
Howdy – Windows Hello style facial authentication for Linux
I know there was extensive testing when face recognition authentication came to smartphones. I wonder how an open source project like this one compares. I suspect there are substantially more false positives/negatives than on a commercially developed version that needs to support everyone to be successful.
Apple's Face ID uses what is essentially a 3D camera, a simple 2D color camera cannot compare to that in terms of accuracy.
AFAIK Pixel phones, including the Pixel 9, only use 2D images for face unlock. So it's definitely possible to reach mainstream quality with conventional cameras.
(Unless you'd argue that the face unlock found on Pixels is not passable either)
I don't know how Google does it, but it's possible to extract 3d information from a 2d sensor. You either need a variable focus or phase detection in the sensor.
It is possible to infer phase from second order intensity via the Huygens-Steiner theorem for rigid body rotation, FWIU: https://news.ycombinator.com/item?id=42663342 .. https://news.ycombinator.com/item?id=37226121#37226160
Doesn't that mean that any camera can be used to infer phase (and thus depth for face ID, which is a high risk application)?
> variable focus
A light field camera (with "infinite" focus) would also work.
Very cool. Yes, probably? I'll have to think about the relationship between image quality and the fidelity of the derived phase measurement, because it's not obvious how good a camera needs to be to be "good enough" for a secure system.
Light field? I remember Lytro! Such cool technology that never found its niche. https://en.wikipedia.org/wiki/Lytro
Is anybody making a successor product?
I guess the task is to design an experiment to test the error between phase inferred from intensity in a digital camera (by Huygens-Steiner and a barycentric coordinate map) and far more expensive photonic phase sensors.
Is (framerate-1 Hz) a limit, due to the discrete derivative being null for the first n points?
Fortunately this article explained the implications of said breakthrough; "Physicists use a 350-year-old theorem [Huygens-Steiner] to reveal new properties of light waves" https://phys.org/news/2023-08-physicists-year-old-theorem-re... :
> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
IDK what happened with wave field cameras like the Lytro. They're possibly useful for face ID, too?
"SAR wavefield". There's a thing.
From https://news.ycombinator.com/item?id=32819838 :
> Wave Field recordings are probably [would probably be] the most complete known descriptions of the brain and its nonlinear fields?
Writing a basic Linux device driver when you know nothing about Linux drivers
NASA's Voyager Found a 30k-50k Kelvin "Wall" at the Edge of Solar System
Is it that there is not enough mass beyond the 30k-50k Kelvin wall at the edge of the solar system to attract away things with mass that can carry thermal energy; that thermal mass clumps in the well around the edges and only wisps away, or is that a sidewall boundary of a black hole?
Where is Planet X in relation to said wall of energy density?
Said wall is only sampled by the Voyager probes with a few exit trajectories?
Does said thermal wall extend all the way around the solar system, or is it mostly on one side of the sun; is it a directional coronal wake? Is there symmetry in said thermal wall around the trajectory of the sun?
Is this better explained with SQR Superfluid Quantum Relativity?
Are there other phases of matter at those temperatures?
From the article:
> "As the heliosphere plows through interstellar space, a bow shock forms, similar to what forms as a ship plowing through the ocean
So fluidic space wind and fluidic nonlinear bow shock wakes.
Are there additional heat walls beyond (and probably also before) the first, as there are with more laminar boat wakes?
Is there a gravitational wave "bow shock", too?
"The Heliosphere" https://www.nasa.gov/image-article/heliosphere-4/
From "Two galaxies aligned in a way where their gravity acts as a compound lens" https://news.ycombinator.com/item?id=42159195 :
> "The helical model - our solar system is a vortex" https://youtube.com/watch?v=0jHsq36_NTU
Where are planet X and the heat wall (and/or side wall) in this vortical model of the solar system?
Heliosphere > Heliopause: https://en.wikipedia.org/wiki/Heliosphere#Heliopause
The heliopause is due to a balance of pressure between the Ram pressure of the solar wind, and the Total pressure of the interstellar medium.
The "pressure" of such fluidic solar and interstellar "wind" is due to n-body gravity or the shape of spacetime.
The R&D Venture Studio Playbook
NewsArticle: "From MIT, an instruction manual for turning research into startups: MIT Proto Ventures publishes venture studio playbook to catalyze innovation at research institutions." (2025) https://news.mit.edu/2025/from-mit-instruction-for-manual-tu... :
Google DeepMind team up to solve the Navier-Stokes million-dollar problem
Notes for such efforts:
From https://news.ycombinator.com/item?id=44043518#44053779 re: deep learning poised:
> jax-cfd mentions phiflow
> PhiFlow: https://github.com/tum-pbs/PhiFlow/
>> A differentiable PDE solving framework for machine learning
SymPy can solve ODEs and some PDEs.
sympy.solvers.pde: https://docs.sympy.org/latest/modules/solvers/pde.html
SymPy's sympy.utilities.lambdify.lambdify() compiles things to faster solvers like CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, SymPy, numexpr, and PyTorch. https://docs.sympy.org/latest/modules/utilities/lambdify.htm...
dynamicslab/pysindy; https://github.com/dynamicslab/pysindy :
> A package for the sparse identification of nonlinear dynamical systems from data
A question about fundamental Anosov flows and CFD in pysindy; due to "Flow Proof Helps Mathematicians Find Stability in Chaos" (2023) https://www.quantamagazine.org/flow-proof-helps-mathematicia... .. https://github.com/dynamicslab/pysindy/issues/383 :
/? site:github.com anosov https://www.google.com/search?q=site%3Agithub.com+anosov
> GitHub topic: quantum-fluids: https://github.com/topics/quantum-fluids
GitHub topic: Gross-Pitaevskii: https://github.com/topics/gross-pitaevskii
OSIRIS-code can simulate laser emissions in plasma, nonlinear optics in plasma,; and supports Checkpointing and thus probably parallelization; https://news.ycombinator.com/context?id=44371059
For simulations of gravity-assisted spacecraft trajectories, in n-body (vortical fluidic) gravity:
> JPL SPICE toolkit: https://naif.jpl.nasa.gov/naif/toolkit.html
> SpiceyPy: https://github.com/AndrewAnnex/SpiceyPy
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017) https://hal.science/hal-01248015/ :
> [ Bernoulli, Navier-Stokes, Gross-Pitaevskii vortices in a field with curl ]
Shouldn't solving NS also solve for n-body gravity?
Anosov diffeomorphism; hyperbolicity of complex nonlinear dynamic fluid systems, Lyapunov exponents : https://en.wikipedia.org/wiki/Anosov_diffeomorphism
Curl: https://en.wikipedia.org/wiki/Curl_(mathematics)
Vorticity: https://en.wikipedia.org/wiki/Vorticity
Bernoulli's principle: https://en.wikipedia.org/wiki/Bernoulli%27s_principle
Gross-Pitaevskii equation: https://en.wikipedia.org/wiki/Gross%E2%80%93Pitaevskii_equat...
Navier-Stokes equations: https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equation...
Um, no?
This is a fine collection of links - much to learn! - but the connection between flow and gravitation is (in my understanding) limited to both being Green's function solutions of a Poisson problem. https://en.wikipedia.org/wiki/Green%27s_function
There are n-body methods for both (gravitation and Lagrangian vortex particle methods), and I find the similarities and differences of those algorithms quite interesting.
But the Fedi paper misses that key connection: they're simply describing a source/sink in potential flow, not some newly discovered link.
Quantum spin Hall effect in magnetic graphene
ScholarlyArticle: "Quantum spin Hall effect in magnetic graphene" (2025) https://www.nature.com/articles/s41467-025-60377-1
NewsArticle: "Quantum spin currents in graphene without external magnetic fields pave way for ultra-thin spintronics" https://phys.org/news/2025-06-quantum-currents-graphene-exte...
> Abstract: [...] These spin-polarized gapless edge states can form within the bulk gap of graphene, attainable by inducing staggered potentials, spin-orbit coupling (SOC) and/or magnetic exchange interactions [17,18,19]. Depending on the respective magnitude of the spin-orbit vs. exchange interactions, these edge states can be chiral or helical [20], allowing for topologically protected spin transport that is expected to be robust against disorder [18].
Are they anyons?
> Remarkably, we experimentally realize the presence of helical states at zero external magnetic field, indicating the emergence of the QSH effect despite the breaking of time-reversal symmetry by the induced magnetism [20,37]. The unprecedented zero-magnetic-field detection of the QSH state in this graphene-based magnetic heterostructure, coexisting with the AH effect, makes this system intriguing for the development of quantum spintronic circuitries.
"Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6 .. https://news.ycombinator.com/item?id=42879133
Re: Brandon's Circuit Simulator: https://news.ycombinator.com/item?id=43955906
"Generating and detecting graphene plasmon polaritons with terahertz electronics" https://news.ycombinator.com/item?id=41206079
"Terahertz spectroscopy of collective charge density wave dynamics at the atomic scale" (2024) https://www.nature.com/articles/s41567-024-02552-7 .. "Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054
These have no magnetic field? Which type of superconductor is that similar to?
From https://news.ycombinator.com/item?id=41803662 :
> Type-III superconductivity is destroyed not by Cooper pair breaking but by vortex proliferation generalizing the Berezinskii-Kosterlitz-Thouless mechanism to any dimension.
Is hydrogen plasma to remove oxide from graphene oxide wafers a feasible, sustainable alternative to photoresist?
Re: graphene oxide https://news.ycombinator.com/item?id=43955611 :
> What types of graphene and other forms of carbon do not conduct electricity, are biodegradable , and would be usable as a graphene PCB for semiconductors and superconductors?
> Graphene Oxide (low cost of production), Graphane (hydrogen; high cost of production), Diamond (lowering cost of production, also useful for NV QC nitrogen-vacancy quantum computing; probably in part due to the resistivity of the molecular lattice),
> How could graphene oxide PCBs be made fire-proof?
Salt; like insulation batting. Maybe even processed brine?
Gemini Robotics On-Device brings AI to local robotic devices
The MuJoCo link actually points to https://github.com/google-deepmind/aloha_sim
mujoco_menagerie has Mujoco MJCF XML models of various robots.
google-deepmind/mujoco_menagerie: https://github.com/google-deepmind/mujoco_menagerie
mujoco_menagerie/aloha: https://github.com/google-deepmind/mujoco_menagerie/tree/mai...
Scientists Generated "Impossible" Photons Directly from Quantum Vacuum
NewsArticle: "Oxford physicists recreate extreme quantum vacuum effects" (2025) https://www.physics.ox.ac.uk/news/oxford-physicists-recreate...
NewsArticle: "Photons collide in the void: Quantum simulation creates light out of nothing" (2025) https://www.sciencedaily.com/releases/2025/06/250608072527.h...
ScholarlyArticle: "Computational modelling of the semi-classical quantum vacuum in 3D" (2025) https://www.nature.com/articles/s42005-025-02128-8
OSIRIS docs: https://osiris-code.github.io/documentation/
OSIRIS docs > Reference guide > Input file structure: https://osiris-code.github.io/osiris/reference/
OSIRIS docs > Checkpointing: https://osiris-code.github.io/documentation/checkpointing
OSIRIS Consortium: https://osiris-code.github.io/consortium/ ; like OpenMPI:
> The goal of the open-source OSIRIS project is to provide a large user base with a flexible software framework to study a variety of physics problems within high energy density plasma science, plasma-based acceleration, space and astrophysics, the nonlinear optics of plasmas, QED and other exotic plasmas, and high-intensity laser and beam plasma interactions.
OSIRIS PIC source: https://github.com/osiris-code/osiris/tree/main/source (Fortran)
Do such extreme quantum vacuum effects affect the 100 Gbit/s of random at that point in quantum foam spacetime? [1]
1. "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330 .. https://news.ycombinator.com/item?id=43497414 :
>> google/paranoid_crypto.lib.randomness_tests
Microplastics shed by food packaging are contaminating our food, study finds
This has to be judged against the alternative, which is… I’m not sure in many cases. As just one example, think about how much more of a pain it is to package/ store/ transport/ consume milk in bottles compared to plastic. Of course there’s also paperboard — I think (I Am Not A Packaging Expert) milk is actually easier to handle with noon-plastic than many other foods. Consider what it would mean to avoid plastic for selling meat I think that means going back to individually prepared paper packages, which would be much more expensive.
This is not to say it might not be worth it in some cases, just that it is a trade-off, and plastic is remarkably good at what it does.
Ask HN: Hydrogen plasma to deoxidize Aluminum for sustainable green hydrogen?
Aluminum is plentiful.
Aluminum hydrolyzes water into Hydrogen and Oxygen; but an oxide layer forms and that limits the yield.
Hydrogen plasma removes oxygen from Aluminum ( titanium, maybe graphene oxide wafers, and from).
Hydrogen plasma underwater creates various reactive Hydrogen molecules that further purify water.
Aluminum treated with hydrogen plasma quickly reoxidizes if not immersed in water.
Is hydrogen plasma + aluminum a good solution for green hydrogen?
I'm confused. Why do you need aluminum to turn hydrogen plasma into hydrogen? Couldn't you wait for the plasma to cool down?
Wait, where are you getting hydrogen plasma? The sun?
(H2O + Al + (H) + Electricity) => H, O2, HO, Al
Couldn't the system be bootstrapped by scratching up an aluminum can, pouring in water, and turning a wheel with a handle or wind or water or gravity or so on?
And then some (?) of the produced hydrogen would be used for hydrogen plasma to refinish the aluminum
The big question is whether such a process would give you a net gain or a net loss of hydrogen. You're going to lose hydrogen in the plasma process. Do you lose more from the refinishing than you gain from the rest of the process?
Whether the net yield of hydrogen (and nearly-sanitized H2O and Aluminum oxide slurry) is worth the squeeze.
And, Is hydrogen plasma through and within the water worth it from a just hydrogen yield perspective?
When I asked an AI about research in this - after an LLM suggested hydrogen plasma for deoxidizing titanium (instead of yttrium) the other day - there were a few results; which alone doesn't indicate viability.
What do you do with the aluminum slurry from hydrogen plasma etching and IDK ultrasound?
Ctrl-F aluminum:
"Aluminum formate Al(HCOO)3: Earth-abundant, scalable, & material for CO2 capture" https://news.ycombinator.com/item?id=33501189 .. https://westurner.github.io/hnlog/#story-33501182
"Superconducting nanostrip single photon detectors made of aluminum thin-films" https://westurner.github.io/hnlog/#story-42647229
"Green steel from red mud through climate-neutral hydrogen plasma reduction" https://www.nature.com/articles/s41586-023-06901-z ..
> Red mud [from aluminum production] consists of up to 60% iron oxide. Melting the mud in an electric arc furnace using a plasma containing 10% hydrogen reduces it to liquid iron and liquid oxides, allowing the iron to be easily extracted. The plasma reduction technique takes 10 minutes and produces iron so pure, say the researchers, it can be processed directly into steel. And the no-longer-corrosive metal oxides solidify on cooling, so they can be transformed into glass-like material that could be used as a filling material in the construction industry.
...
"New stainless steel pulls green hydrogen directly out of seawater" https://news.ycombinator.com/item?id=43991630
Three-Dimensional Time: A Mathematical Framework for Fundamental Physics
How do the temporal symmetries cited in this article [1] compare to the time reflections observed in a metamaterial in [2], other models of retrocausality (faster than c causal relations), and time-polarized photons in Minkowski 4-space?
1. "Three-Dimensional Time: A Mathematical Framework for Fundamental Physics" (2025) https://www.worldscientific.com/doi/10.1142/S242494242550004...
2. "Observation of temporal reflection and broadband frequency translation at photonic time interfaces" (2023) https://www.nature.com/articles/s41567-023-01975-y
How Are Transistors Assembled Inside a CPU? [video]
Any recommendations for a similar video about how transistors are produced with nanometer NIL nanoimprint lithography?
In the description they say a video on lithography is on the way. In the mean time maybe this one from the same channel: https://m.youtube.com/watch?v=dX9CGRZwD-w
Great channel
They do have great 3D videos. E.g. "What are PCBs? || How do PCBs Work?" https://youtube.com/watch?v=Z2LgmIGE2nI&
NIL Nanoimprint Lithography does not use EUV laser light.
(There's also solid-state DUV lithography now.)
/?yt nil nanoimprint lithography: https://www.youtube.com/results?search_query=nil+nanoimprint...
Show HN: Turn a paper's DOI into its full reference list (BibTeX/RIS, etc.)
How does DOI interact with blockchain? I did a quick Google search and didn’t find much (lots of mismatches against “DAO”). Does DOI need blockchain for any legit reasons, like provenance?
I’m no blockchain evangelist in its current state of “value” but this seems like a great test case for resolving the academic or legitimate origin of published material.
DOI has nothing to do with blockchain. There's no great looming issue with resolving the legitimate origin of published material. There's no provenance problem to solve. There's a registration problem, that has been solved, and for which blockchains are a terrible fit.
DOIs could be stored for lookup in a blockchain. Isn't there currently a centralized single point of failure in DOI and ORCID resolution?
Users would generate and centrally register or receive a generated W3C DID keypair with which to sign their ScholarlyArticles and peer review CreativeWorks.
W3D DID Decentralized Identifiers solve for what DOI and ORCID solve for without requiring a central registry.
W3C PROV is for describing provenance. PROV RDF can be signed with a DID sk.
PDFs can be signed with their own digital signature scheme, but there's no good way to publish Linked Data in a PDF (prepared as a LaTeX manuscript for example).
Bibliographic and experimental control metadata is only so useful in assuring provenance and authenticity of article and data and peer reviews which legitimize.
From https://news.ycombinator.com/item?id=28382186 :
>> JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:
Captain Cook's missing ship found after sinking 250 years ago
I'm sure the rename had good reason but I can't imagine going from a name like "HMS Endeavour", what a great name, to "Lord Sandwich" ... in modern times that sounds like some lighthearted forum username.
The HMS Lord Sandwich's namesake is almost certainly former 1st Lord of the Admiralty John Montagu, the 4th Earl of Sandwich && person that the dish is actually named after.
He was also head of the British navy ("First Lord of the Admiralty") at the time and a great supporter of Cook's, so there's even a closer connection specific to the Endeavor. Cook named Hawaii the "Sandwich Islands" after him.
https://en.wikipedia.org/wiki/John_Montagu,_4th_Earl_of_Sand...
What were the existing names of the islands?
Chromium Switching from Ninja to Siso
Kinda impressive and terrifying that Chromium needs its own build system. Kinda strange that Bazel was right there, also from Google, and they not only choose not to use it, but also reference it in the name of the new tool.
Show HN: Unregistry – “docker push” directly to servers without a registry
I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.
In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts — the Docker's own image storage.
So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.
docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.I've built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.
Would love to hear your thoughts and use cases!
[1]: https://github.com/psviderski/unregistry
[2]: https://github.com/psviderski/uncloud
Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.
@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?
Do docker-pussh or docker-pushmi-pullyu verify container image signatures and attestations?
From "About Docker Content Trust (DCT)" https://docs.docker.com/engine/security/trust/ :
> Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images.
export DOCKER_CONTENT_TRUST=1
cosign > verifying containers > verify attestation: https://docs.sigstore.dev/cosign/verifying/verify/#verify-at.../? difference between docker content trust dct and cosign: https://www.google.com/search?q=difference+between+docker+co...
docker-pushmi-pullyu does a vanilla `docker pull`[1] on the remote side, so you should be able to set `DOCKER_CONTENT_TRUST` in the remote environment to get whatever behavior you want (though admittedly I have not tested this).
If there's desire for an option to specify `--disable-content-trust` during push and/or pull I'll happily add it. Please file an issue if this is something you want.
[1]: https://github.com/mkantor/docker-pushmi-pullyu/blob/12d2893...
Should it be set in both the local and remote envs?
What does it do if there's no signature?
Do images built and signed with podman and cosign work with docker; are the artifact signatures portable across container CLIs docker, nerdctl, and podman?
From nerdctl/docs/cosign.md "Container Image Sign and Verify with cosign tool" https://github.com/containerd/nerdctl/blob/main/docs/cosign.... ; handily answering my own question aloud:
Sign the container image while pushing, verify the signature on fetch/pull:
# Sign the image with Keyless mode
$ nerdctl push --sign=cosign devopps/hello-world
# Sign the image and store the signature in the registry
$ nerdctl push --sign=cosign --cosign-key cosign.key devopps/hello-world
# Verify the image with Keyless mode
$ nerdctl pull --verify=cosign --certificate-identity=name@example.com --certificate-oidc-issuer=https://accounts.example.com devopps/hello-world
# You can not verify the image if it is not signed
$ nerdctl pull --verify=cosign --cosign-key cosign.pub devopps/hello-world-badVerified dynamic programming with Σ-types in Lean
I've been meaning to learn Lean and fascinated with the concept but syntax like:
let rec helperMemo : Nat → HashMap Nat Nat → Nat × HashMap Nat Nat
is a big turnoff to me. I find it annoying to parse mentally. I can do it but I have to concentrate or it's easy to gloss over an important detail.Does aliasing the types work?
def MemoMap := HashMap Nat Nat
def MemoResult := Nat × MemoMap
let rec helperMemo : Nat → MemoMap → MemoResultRecord types would likely help a lot also.
Tupples don't really indicate what I can expect from the members.
Solar Technology Produces Clean Hydrogen from Plastic Waste
"Polymeric stabilization at the gas–liquid interface for durable solar hydrogen production from plastic waste" (2025) https://www.nature.com/articles/s41565-025-01957-6
Homomorphically Encrypting CRDTs
As the article mentions, fully homomorphic encryption is insanely slow and inefficient. But I have to say that it is a relatively new field (the first FHE scheme was discovered in 2009), and that the field has immensely progressed over the last decade and a half.
The first FHE scheme required keys of several TB/PB, bootstrapping (an operation that is pivotal in FHE schemes, when too many multiplications are computed) would take thousands of hours. We are now down to keys of "only" 30 MB, and bootstrapping in less than 0.1 second.
Hopefully progress will continue and FHE will become more practical.
Should students trust and run FHE encrypted WASM or JS grading code that contains the answers on their own Chromebooks; for example with JupyterLite and ottergrader?
On code signing and the SETI@home screensaver
Show HN: Trieve CLI – Terminal-based LLM agent loop with search tool for PDFs
Hi HN,
I built a CLI for uploading documents and querying them with an LLM agent that uses search tools rather than stuffing everything into the context window. I recorded a demo using the CrossFit 2025 rulebook that shows how this approach compares to traditional RAG and direct context injection[1].
The core insight is that LLMs running in loops with tool access are unreasonably effective at this kind of knowledge retrieval task[2]. Instead of hoping the right chunks make it into your context, the agent can iteratively search, refine queries, and reason about what it finds.
The CLI handles the full workflow:
```bash
trieve upload ./document.pdf
trieve ask "What are the key findings?"
```
You can customize the RAG behavior, check upload status, and the responses stream back with expandable source references. I really enjoy having this workflow available in the terminal and I'm curious if others find this paradigm as compelling as I do.
Considering adding more commands and customization options if there's interest. The tool is free for up to 1k document chunks.
Source code is on GitHub[3] and available via npm[4].
Would love any feedback on the approach or CLI design!
[1]: https://www.youtube.com/watch?v=SAV-esDsRUk [2]: https://news.ycombinator.com/item?id=43998472 [3]: https://github.com/devflowinc/trieve/blob/main/clients/cli/i... [4]: https://www.npmjs.com/package/trieve-cli
[flagged]
simonw/llm is a CLI for LLMs: https://github.com/simonw/llm
`llm --help`: https://llm.datasette.io/en/stable/help.html#llm-help
simonw/llm plugin directory: https://llm.datasette.io/en/stable/plugins/directory.html#pl...
From https://simonwillison.net/2024/Jun/17/cli-language-models/ :
> Every prompt and response run through the LLM tool is permanently logged to a SQLite database,
Noting that I linked to paperai/paperetl, paperqa2, paperqa-zotero, and The Oracle of Zotero (which have CLIs and LLM workflows for PDFs) and it was flagged. The content of such post is here, for the record: https://pastebin.com/CuJ8Zau0
Skin cells turned directly into neurons for cell therapy
Python's GIL Removal Reveals Second, Stronger GIL Behind It
Oh, the docs for colesbury/nogil AKA free threading:
CPython docs > Free Threading HOWTO > Python experimental support for free threading: https://docs.python.org/3/howto/free-threading-python.html#p...
CPython docs > Free Threading HOWTO > C API Extension Support for Free Threading: https://docs.python.org/3/howto/free-threading-extensions.ht...
Python Free-Threading Guide > Run your code on free-threaded Python: https://py-free-threading.github.io/#run-your-code-on-free-t...
ReadTheDocs used to have an "Edit" button that linked to the (GitHub,) page to edit the current page of the docs in source control.
Show HN: Canine – A Heroku alternative built on Kubernetes
Hello HN!
I've been working on Canine for about a year now. It started when I was sick of paying the overhead of using stuff like Heroku, Render, Fly, etc to host some web apps that I've built. At one point I was paying over $400 a month for hosting these in the cloud. Last year I moved all my stuff to Hetzner.
For a 4GB machine, the cost of various providers:
Heroku = $260 Fly.io = $65 Render = $85 Hetzner = $4
(This problem gets a lot worse when you need > 4GB)
The only downside of using Hetzner is that there isn’t a super straightforward way to do stuff like:
- DNS management / SSL certificate management - Team management - Github integration
But I figured it should be easy to quickly build something like Heroku for my Hetzner instance. Turns out it was a bit harder than expected, but after a year, I’ve made some good progress
The best part of Canine, is that it also makes it trivial to host any helm chart, which is available for basically any open source project, so everything from databases (e.g. Postgres, Redis), to random stuff like torrent tracking servers, VPN’s endpoints, etc.
Open source: https://github.com/czhu12/canine Cloud hosted version is: https://canine.sh
We maintain list of PaaS platform out there in the wild - https://github.com/debarshibasak/awesome-paas
dokku is a minimal PaaS that can also run on a VPS. There's a dokku-scheduler-kubernetes: https://github.com/dokku/dokku-scheduler-kubernetes
But it doesn't have support Helm charts.
Cloud computing architecture > Delivery links to SaaS, DaaS, DaaS, PaaS, IaaS: https://en.wikipedia.org/wiki/Cloud_computing_architecture
Cloud-computing comparison: https://en.wikipedia.org/wiki/Cloud-computing_comparison
Category:Cloud_platforms: https://en.wikipedia.org/wiki/Category:Cloud_platforms
awesome-selfhosted has a serverless / FaaS category that just links to awesome-sysadmin > PaaS: https://github.com/awesome-selfhosted/awesome-selfhosted#sof...
I’ve recently started an open-source self-hosted data platform (https://github.com/kot-behemoth/kitsunadata) with Dokku being a great initial deployment mode. It’s mature, simple to get started and has tons of docs / tutorials.
I collected a bunch of links while learning it, and launched https://github.com/kot-behemoth/awesome-dokku, as there wasn’t an “awesome” list.
Hope it helps someone!
https://dokku.com/docs/deployment/schedulers/k3s/
This is a more featureful version.
Fields where Native Americans farmed a thousand years ago discovered in Michigan
Tangentially related: I'm trying to make LiDAR data in Switzerland more accessible, see https://github.com/r-follador/delta-relief
There's some interesting examples in the Readme.
Does LIDAR work underwater?
FWIU in Grand Traverse Bay, Lake Michigan, there's a 9,000 year old stonehenge-like structure 40 feet underwater; that's 4000 thousand years older than Stonehenge and about 6000 years older than the Osireoin and the Pyramids.
/? Michigan underwater stonehenge: https://www.google.com/search?q=michigan+underwater+stonehen...
There's not even a name or a wikipedia page for the site? There are various presumed Clovis sites which are now underwater in TN, as well.
A lot of the pictures used in articles for this are pictures of something else (possible an old ship). Here's what it actually looks like: https://holleyarchaeology.com/index.php/the-truth-about-the-...
Calling it Stonehenge-like is a real stretch.
DARPA program sets distance record for power beaming
Is it possible to steer the weather by heating the atmosphere with power beaming microwaves?
Is it possible to cancel the vortical formation of a tornado or a hurricane with microwave power beam(s)?
Does heating the atmosphere with microwaves change the weather, or the jet stream, or the cloud cover?
What sort of a fluidic weather simulator could answer this question?
Is there a fluid simulation device that allows for precise wireless heating of certain points in the fluid?
If so, there could be international space law to study and control for the known and presumed risks of space-based microwave power beaming.
Nuking a hurricane would not break it because nukes are not big enough https://edition.cnn.com/2019/08/26/weather/hurricane-nuclear...
I think it's possible to nuke a tornado, but if someone does it to try to save a city, I expect even more destruction.
I don't expect the beam to be energetic enough to change anything. The closest method I can think of is https://en.wikipedia.org/wiki/Cloud_seeding but it doesn't distribute energy, just tiny crystals to condense oversaturated vapor.
Theoretically there is at least one point where a butterfly flapping its wings sufficiently affects seemingly nonlocal global weather.
Presumably it's easier to prevent formation of vortices far before visible formation.
A "cancel vortices" sim table might be more useful than the average CFD simulation.
Is gravity just entropy rising? Long-shot idea gets another look
As an experimental physicist, I refuse to get excited about a new theory until the proponent gets to an observable phenomenon that can fix the question.
The problem with emergent theories like this is that they _derive_ Newtonian gravity and General Relativity so it’s not clear there’s anything to test. If they are able to predict MOND without the need for an additional MOND field then they become falsifiable only insofar as MOND is.
Please, how is the article related to MOND's theories?
In general, they’re not. But if the only thing emergent theories predict is Newtonian dynamics and General Relativity then that’s a big problem for falsifiability. But if they modify Newtonian dynamics in some way, then do we have something to test.
From https://news.ycombinator.com/item?id=43738580 :
> FWIU this Superfluid Quantum Gravity [SQG, or SQR Superfluid Quantum Relativity] rejects dark matter and/or negative mass in favor of supervaucuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From https://news.ycombinator.com/item?id=43310933 re: second sound:
> - [ ] Models fluidic attractor systems
> - [ ] Models superfluids [BEC: Bose-Einstein Condensates]
> - [ ] Models n-body gravity in fluidic systems
> - [ ] Models retrocausality
From https://news.ycombinator.com/context?id=38061551 :
> A unified model must: differ from classical mechanics where observational results don't match classical predictions, describe superfluid 3Helium in a beaker, describe gravity in Bose-Einstein condensate superfluids, describe conductivity in superconductors and dielectrics, not introduce unobserved "annihilation", explain how helicopters have lift, describe quantum locking, describe paths through fluids and gravity, predict n-body gravity experiments on earth in fluids with Bernoulli's and in space, [...]
> What else must a unified model of gravity and other forces predict with low error?
A.I. Is Poised to Rewrite History. Literally.
Cheap yet ultrapure titanium might enable widespread use in industry (2024)
> Unfortunately, producing ultrapure titanium is significantly more expensive than manufacturing steel (an iron alloy) and aluminum, owing to the substantial use of energy and resources in preparing high-purity titanium. Developing a cheap, easy way to prepare it—and facilitate product development for industry and common consumers—is the problem the researchers aimed to address.
"Direct production of low-oxygen-concentration titanium from molten titanium" (2024) https://www.nature.com/articles/s41467-024-49085-4
Any comments from someone in the metals industry? The paper shows this process being done at lab scale. It needs to be scaled up to steel mill size. How hard does that look?
What a useful question though. I hadn't realized that the cost of titanium is due to lack of a process for removing oxygen.
What is the most efficient and sustainable alternative to yttrium for removing oxygen from titanium?
process(TiO2, …) => Ti, …
From the Gemini 2.5 Pro AI "expert", with human review:
> For primary titanium production (from ore): Molten Salt Electrolysis (Direct Electrochemical Deoxygenation, FFC Cambridge, OS processes, etc.) and calciothermic reduction in molten salts
> They aim to [sic.] revolutionize titanium production by moving away from the energy-intensive and environmentally impactful Kroll process, directly reducing TiO2 and offering the potential for closed-loop systems.
> For recycling titanium scrap and deep deoxidation: Hydrogen plasma arc melting and calcium-based deoxidation techniques (especially electrochemical calcium generation) are highly promising. Hydrogen offers extreme cleanliness, while calcium offers potent deoxidizing power.
...
> Magnesium Hydride Reduction (e.g., University of Utah's reactor)
> Solid-State Reduction (e.g., Metalysis process)
Are there more efficient, sustainable methods of titanium production?
Also, TIL Ti is a catalyst for CNT carbon nanotube production; and, alloying CNTs with Ti leaves vacancies.
> From the Gemini 2.5 Pro AI "expert", with human review:
You don't know enough about the subject to answer the question on your own, do you? So your "review" is really just cutting and pasting shit you also don't understand, which may or may not be true.
Thanks for your service.
Do you have a factual dispute with what I posted?
Are any of those alternatives hallucinations, in your opinion?
I feel no obligation to assist further with this.
> Do you have a factual dispute with what I posted?
I don't have nearly enough knowledge or experience in the subject to talk about the factual accuracy of what you posted. The whole point of my comment was, neither do you.
So, to your knowledge there is no factual inconsistency with what I have posted?
You have assumed that I didn't review the content I prepared to post. You have alleged this in ignorance, and you have harassed disrespectfully without due process.
I did not waste your time with spammy unlabeled AI BS.
I have given my "review search results" time for free; and, in this case too, I have delivered value. You made this a waste of my time. You have caused me loss with such harassment. I have not caused you loss by posting such preliminary research (which checks out).
Did others in this thread identify and share alternative solutions for getting oxygen out of titanium? I believe it was fair to identify and share alternative solutions to the OT which I (re-) posted because this is an unsolved opportunity.
I believe it's fair and advisable to consult and clearly cite AI.
Why would people cite their use of AI? Isn't that what we want?
Which helped solve the OT problem?
Given such behavior toward me in this forum, I should omit such insightful research (into "efficient and sustainable alternatives") to deny them such advantage.
This was interesting to me and worth spending my personal time on also because removing oxygen from graphene oxide wafers is also a billion dollar idea. Does "hydrogen plasma" solve for deoxidizing that too?
Cargo fuzz: a cargo subcommand for fuzzing with libFuzzer
The other day I noticed the fuzzing support in the zip crate.
How does cargo-fuzz compare to cargo-afl and the `cargo afl fuzz` command?
Rust Fuzz Book > Tutorial: https://rust-fuzz.github.io/book/afl/tutorial.html#start-fuz...
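For a rough side-by-side (command names are from the cargo-fuzz README and the afl.rs tutorial linked above; the target and corpus paths are placeholders):

```shell
# cargo-fuzz: libFuzzer-based, in-process, coverage-guided (nightly toolchain):
cargo install cargo-fuzz
cargo fuzz init              # scaffolds fuzz/fuzz_targets/
cargo fuzz run fuzz_target_1

# cargo-afl: AFL++-based, fork-server style, with an on-disk corpus:
cargo install cargo-afl
cargo afl build
cargo afl fuzz -i in_corpus -o out_findings target/debug/my_fuzz_target
```

The practical difference is mostly the execution model: libFuzzer fuzzes in-process and is easy to wire into CI, while AFL++ forks per run and manages its corpus and findings on disk.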
Show HN: Spark, An advanced 3D Gaussian Splatting renderer for Three.js
I'm the co-creator and maintainer of https://aframe.io/ and long time Web 3D graphics dev.
Super excited about new techniques to author, render, and represent 3D. Spark is an open source library I worked on with some friends to easily integrate Gaussian splats into your THREE.js scene. I hope you find it useful.
Looking forward to hearing what features / rendering techniques you would love to see next.
PartCAD can export CAD models to Three.js.
The OCP CAD viewer extension for build123d and cadquery models, for example, is also built on Three.js. https://github.com/bernhard-42/vscode-ocp-cad-viewer
New Hydrogel Turns Toxic Wastewater and Algae Blooms into Garden Gold
Whey protein fibrils, Yeast-laden hydrogels, brewing waste, oleophilic hemp aerogels, chitosan and magnesium;
But not real gold. For that, you'd need whey protein;
From "Turning waste into gold" (2024) https://www.sciencedaily.com/releases/2024/02/240229124612.h... :
> Researchers have recovered gold from electronic waste. Their highly sustainable new method is based on a protein fibril sponge, which the scientists derive from whey, a food industry byproduct. ETH Zurich researchers have recovered the precious metal from electronic waste [in solution]
From "Brewing tea removes lead from water" https://news.ycombinator.com/item?id=43165095 :
> "Yeast-laden hydrogel capsules for scalable trace lead removal from water" (2024) https://pubs.rsc.org/en/content/articlelanding/2024/su/d4su0...
> "Application of brewing waste as biosorbent for the removal of metallic ions present in groundwater and surface waters from coal regions" (2018) https://www.sciencedirect.com/science/article/abs/pii/S22133...
Chitosan + magnesium, or hydrogels; which is preferable given efficiency and sustainability as criteria?
"Simultaneous ammonium and phosphate removal with Mg-loaded chitosan carbonized microsphere: Influencing factors and removal mechanism" (2023) https://www.sciencedirect.com/science/article/abs/pii/S00139...
ScholarlyArticle: "Molecular Insights into Novel Struvite–Hydrogel Composites for Simultaneous Ammonia and Phosphate Removal" (2025) https://pubs.acs.org/doi/10.1021/acs.est.4c11700
Supporting Information for "Molecular Insights into Novel Struvite– Hydrogel Composites for Simultaneous Ammonia and Phosphate Removal" (2025) https://acs.figshare.com/ndownloader/files/54930495
Publish a Python Wheel to GCP Artifact Registry with Poetry
GCP Artifact Registry is an OCI Container Image Registry.
It looks like there are a few GitHub Actions for pushing container image artifacts to GCP Artifact Registry: https://github.com/marketplace?query=artifact+registry&type=...
FWIW, though it may not be necessary for a plain Python package, "pypa/cibuildwheel" is the easy way to build a Python package for various platforms in CI.
SLSA.dev, Sigstore;
GitHub supports artifact attestations and storing attestations in an OCI Image store FWIU. Does GCP Artifact Registry support attestations?
"Using artifact attestations to establish provenance for builds" https://docs.github.com/en/actions/security-for-github-actio...
> GCP Artifact Registry is an OCI Container Image Registry.
That is one of the supported formats (and maybe most common), but not the only one.
https://cloud.google.com/artifact-registry/docs/supported-fo...
The Python one behaves just like PyPI; you just need to specify the URL and provide credentials.
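Concretely, that looks something like this (repository and project names here are illustrative; the URL shape and the `oauth2accesstoken` username are the pattern from Google's Artifact Registry docs):

```shell
# Publish with Poetry (names "my-project"/"my-repo" are placeholders):
poetry config repositories.gcp \
    https://us-central1-python.pkg.dev/my-project/my-repo/
poetry config http-basic.gcp oauth2accesstoken \
    "$(gcloud auth print-access-token)"
poetry publish --build --repository gcp

# Install from the same registry with pip (note the /simple/ suffix,
# i.e. the PEP 503 endpoint):
pip install my-package \
    --index-url https://us-central1-python.pkg.dev/my-project/my-repo/simple/
```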
GitHub specifically doesn't have Python package index (PEP 503, PEP 740) support on their roadmap: https://github.com/github/roadmap/issues/94#issuecomment-158...
GitLab has Python package registry support (503?): https://docs.gitlab.com/user/packages/pypi_repository/
Gitea has Python package registry support (503?): https://docs.gitea.com/usage/packages/pypi
PyPI supports attestations for Python packages when built by Trusted Publishers: https://docs.pypi.org/attestations/ :
> PyPI uses the in-toto Attestation Framework for the attestations it accepts. [ in-toto/attestation spec: https://github.com/in-toto/attestation/blob/main/spec/README... ]
> Currently, PyPI allows the following attestation predicates:
> SLSA Provenance, PyPI Publish
Artifact Registry > Artifact Registry documentation > Guides > Manage Python packages: https://cloud.google.com/artifact-registry/docs/python/manag... :
> [Artifact Registry] private repositories use the canonical Python repository implementation, the simple repository API (PEP 503), and work with installation tools like pip.
PEP 503 – Simple Repository API: https://peps.python.org/pep-0503/
PEP 740 – Index support for digital attestations: https://peps.python.org/pep-0740/
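The PEP 503 index format itself is tiny: an HTML page of anchor tags, one per distribution file, which is part of why so many registries can implement it. A minimal sketch of parsing one (the sample page below is made up, in the shape PEP 503 prescribes):

```python
from html.parser import HTMLParser

class SimpleIndexParser(HTMLParser):
    """Collects (filename, href) pairs from a PEP 503 project page,
    where each distribution file is a single <a> anchor."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href is not None and data.strip():
            self.links.append((data.strip(), self._href))
            self._href = None

# Made-up sample page:
page = ('<html><body>'
        '<a href="pkg-1.0-py3-none-any.whl#sha256=abc">'
        'pkg-1.0-py3-none-any.whl</a>'
        '</body></html>')
parser = SimpleIndexParser()
parser.feed(page)
```

The `#sha256=...` URL fragment is how PEP 503 carries file hashes; PEP 740 layers attestation metadata on top of this same index.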
Quantum physicists unveil most 'trustworthy' random-number generator yet
What is the bitrate?
Presumably it has passed the NIST randomness tests in paranoid_crypto.
From https://news.ycombinator.com/item?id=43497414 :
> 100Gbit/s is faster than qualifying noise from a [90 meters large] quantum computer?
>> google/paranoid_crypto.lib.randomness_tests
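As a sketch of what one of those randomness tests does, here is a minimal frequency (monobit) test in the style of NIST SP 800-22; real suites like paranoid_crypto run many such tests, not just this one:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: returns a p-value for
    the hypothesis that 0s and 1s are equally likely; p < 0.01 is the
    usual failure threshold."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly balanced sequence passes this particular test (other
# tests in the suite, like the runs test, would catch its structure);
# a constant stream fails it decisively.
p_balanced = monobit_test([0, 1] * 500)
p_biased = monobit_test([1] * 1000)
```

Passing statistical tests is necessary but not sufficient; the "trustworthy" claim in the article is about device-independent certification, which statistical tests alone cannot provide.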
Window-sized device taps the air for safe drinking water
AKA dehumidifier
[deleted]
CISOs urged to push vendors for roadmaps on post-quantum cryptography readiness
How do people test post-quantum cryptography? How do they verify their encryption can not be defeated by quantum computing without having access to real world quantum computing? Are they basing everything off theories?
Best be ready, for Q-day:
> The looming ‘Q-Day’ should also be used as the stick to get approval to carry out a cryptographic inventory and roll out projects that foster cryptographic agility more generally.
> Q-Day will not be announced and businesses need to take action now in the face of a growing threat.
[...]
> “An orderly transition will cost less than emergency planning,” Holmqvist said. “It’s like Y2K but without an actual date.”
Q-Day is in a superimposed state! It's everyday from today to the heat death of the universe. We won't know for sure until we measure it.
Could be software, could be hardware. AI could be bringing it nearer in time.
Practically,
How to add the PQ library or upgrade to the version with PQ ciphers?
How to specify that PQ Ciphers are optional in addition to non-PQ Ciphers (TLS 1.3, downgrade risk) or necessary? Where is the configuration file with the cipher list parameter?
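One concrete shape the answer can take is an openssl.cnf fragment (illustrative; the group name X25519MLKEM768 assumes an OpenSSL build with ML-KEM support, e.g. 3.5+, or the oqs-provider on older builds; check `openssl list -kem-algorithms` on your build):

```ini
# Prefer a hybrid PQ group for TLS 1.3 key exchange, keeping classical
# groups later in the list so non-PQ peers can still negotiate
# (which is also where the downgrade-risk tradeoff lives).
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Groups = X25519MLKEM768:X25519:secp256r1
```

Making PQ mandatory rather than optional would mean listing only PQ/hybrid groups, at the cost of breaking interop with non-PQ peers.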
Despite Rising Concerns, 95% of Organizations Lack a Quantum Computing Roadmap
"CISOs urged to push vendors for roadmaps on post-quantum cryptography readiness" https://news.ycombinator.com/item?id=44226486
The first part of the roadmap: Build a useful Quantum Computer;)
How about this:
"New Quantum Algorithm Factors Numbers with One Qubit" (2025-06) https://news.ycombinator.com/item?id=44225120
Or this one:
"How Close Is Commercial Quantum Computing?" (2025-06) https://news.ycombinator.com/item?id=44226205
I read all of Cloudflare's Claude-generated commits
> Reading through these commits sparked an idea: what if we treated prompts as the actual source code? Imagine version control systems where you commit the prompts used to generate features rather than the resulting implementation.
Please god, no, never do this. For one thing, why would you not commit the generated source code when storage is essentially free? That seems insane for multiple reasons.
> When models inevitably improve, you could connect the latest version and regenerate the entire codebase with enhanced capability.
How would you know if the code was better or worse if it was never committed? How do you audit for security vulnerabilities or debug with no source code?
My work has involved a project that is almost entirely generated code for over a decade. Not AI generated, the actual work of the project is in creating the code generator.
One of the things we learned very quickly was that having generated source code in the same repository as actual source code was not sustainable. The nature of reviewing changes is just too different between them.
Another thing we learned very quickly was that attempting to generate code, then modify the result is not sustainable; nor is aiming for a 100% generated code base. The end result of that was that we had to significantly rearchitect the project for us to essentially inject manually crafted code into arbitrary places in the generated code.
Another thing we learned is that any change in the code generator needs to have a feature flag, because someone was relying on the old behavior.
> One of the things we learned very quickly was that having generated source code in the same repository as actual source code was not sustainable.
Keeping a repository with the prompts, or other commands separate is fine, but not committing the generated code at all I find questionable at best.
If you can 100% reproduce the same generated code from the same prompts, even 5 years later, given the same versions and everything, then I'd say "Sure, go ahead and don't save the generated code, we can always regenerate it". As someone who spent some time in frontend development: we've been doing it like that for a long time with (MB+) generated code; keeping it in scm just isn't feasible long-term.
But given this is about LLMs, which people tend to run with temperature>0, this is unlikely to be true, so then I'd really urge anyone to actually store the results (somewhere, maybe not in scm specifically) as otherwise you won't have any idea about what the code was in the future.
Temperature > 0 isn’t a problem as long as you can specify/save the random seed and everything else is deterministic. Of course, “as long as” is still a tall order here.
My understanding is that the implementation of modern hosted LLMs is nondeterministic even with known seed because the generated results are sensitive to a number of other factors including, but not limited to, other prompts running in the same batch.
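The seed point is easy to illustrate in isolation: a toy temperature sampler over fixed logits (the numbers below are made up) is fully deterministic given a seed; it's the serving-side factors like batching that break this for hosted models:

```python
import math
import random

def sample_token(logits, temperature, seed):
    """Toy temperature sampling over a vocabulary of len(logits) tokens.
    With a fixed seed and deterministic float math, the same inputs
    always produce the same token index."""
    rng = random.Random(seed)
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1  # guard against float rounding

# Same prompt (logits), same temperature, same seed -> same token.
a = sample_token([2.0, 1.0, 0.1], temperature=0.8, seed=42)
b = sample_token([2.0, 1.0, 0.1], temperature=0.8, seed=42)
```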
Gemini, for example, launched implicit caching on or about 2025-05-08: https://developers.googleblog.com/en/gemini-2-5-models-now-s... :
> Now, when you send a request to one of the Gemini 2.5 models, if the request shares a common prefix as one of previous requests, then it’s eligible for a cache hit. We will dynamically pass cost savings back to you, providing the same 75% token discount.
> In order to increase the chance that your request contains a cache hit, you should keep the content at the beginning of the request the same and add things like a user's question or other additional context that might change from request to request at the end of the prompt.
From https://news.ycombinator.com/item?id=43939774 re: same:
> Does this make it appear that the LLM's responses converge on one answer when actually it's just caching?
A Lean companion to Analysis I
A Lean textbook!
Why no HoTT, though?
"Should Type Theory (HoTT) Replace (ZFC) Set Theory as the Foundation of Math?" https://news.ycombinator.com/item?id=43196452
Additional Lean resources from HN this week:
"100 theorems in Lean" https://news.ycombinator.com/item?id=44075061
"Google-DeepMind/formal-conjectures: collection of formalized conjectures in lean" https://news.ycombinator.com/item?id=44119725
> Why no HoTT, though?
Sort of a weird question to ask imo.
Terence Tao has a couple of analysis textbooks and this is his companion to the first of those books in Lean. He doesn’t have a type theory textbook, so that’s why no homotopy type theory - it’s not what he’s trying to do at all.
If HoTT is already proven, and sets, categories, and types are already proven, I agree that it's not necessary to prove the same in an applied analysis book; though it is another opportunity to verify HoTT in actual application domains.
"Is this consistent with HoTT?" a tool could ask.
But none of this is what he’s trying to do.
He wrote a book to go with his course in undergraduate real analysis. <= this does not contain HoTT because HoTT is not part of undergraduate real analysis
He’s making a companion to that book for lean <= so this also doesn’t contain HoTT.
Just like it doesn’t contain anything about raising partridges or doing skydiving, it doesn’t have anything about HoTT because that’s not what he’s trying to write a book about. He’s writing a lean companion to his analysis textbook. I get that you are interested in HoTT. If so you should probably get a book on it. This isn’t that book.
He is a working mathematician showing how a proof assistant can be used as part of accomplishing a very specific mainstream mathematical task (ie proving the foundational theorems of analysis). He's not trying to write something about the theoretical basis for proof assistants.
Are there types presented therein?
Presumably, left-to-right MRO solves for diamond-inheritance because of a type theory.
I suppose it doesn't matter if HoTT is the most sufficient type / category / information-theoretic set theory for inductively-presented real analysis in classical spaces at all.
But then why present in Lean, if the precedent formalisms are irrelevant? (Fairly also, is HoTT proven as a basis for Lean itself, though?)
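As an aside, the MRO remark above is easy to check concretely in Python, whose C3 linearization resolves the classic diamond:

```python
# Diamond: D inherits from B and C, which both inherit from A.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# C3 linearization visits D, then B before C (left-to-right base
# order), then the shared base A exactly once, then object.
mro_names = [cls.__name__ for cls in D.__mro__]
```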
> Are there types presented therein?
No. Analysis typically is presented using naive set theory to build the natural numbers, integers, rationals and (via Dedekind cuts) real numbers. For avoidance of doubt, in the standard construction of maths these are sets, not types. Then from there a standard first course in analysis deals with the topology of the reals, sequences and series, limits and continuity of real functions, derivatives and the integral. Then if people study analysis further it would typically involve complex numbers and functions, how you do calculus with them, Fourier analysis etc, but types and type theory wouldn’t form part of any standard treatment of analysis that I’m aware of.
Types are not a mainstream topic in pure maths. Type theory was part of Bertrand Russell’s attempt to solve the problems of set theory, which ended up as a bit of a sideshow because ZF/ZFC and naive set theory turned out to require far less of a rewrite of all the rest of maths and so became the standard approach. Types come up I think if people go deeply into formal logic or category theory (neither of which is a type of analysis), and category theory is sometimes touched on in abstract algebra after dealing with the more conventional topics (groups, rings, fields, modules). Most people I know who know about type theory came at it from a computer science angle rather than pure maths. You might learn some type theory if you do the type of computer science course where you learn lambda calculus.
> why present in Lean, if the precedent formalisms are irrelevant?
If someone gave a powerpoint presentation, do they have to use the presentation to talk about how powerpoint is made? If someone writes a paper in latex, does the paper have to be about latex and mathematical typesetting? Or are those tools that you’re allowed to use to accomplish some goal?
He’s presenting in Lean because that’s the proof assistant that he actually uses and he’s interested in proof assistants and other tools and how they help mathematicians. He’s not presenting about Lean and how it’s made. He’s showing how you can use it to do proofs in analysis.
Well, FWIU, what you refer to as "topology" is founded in HoTT "type theory".
> for avoidance of doubt in the standard construction of maths these are sets, not types. Then from there a standard first course in analysis deals with the topology of the reals, sequences and series, limits and continuity of real functions, derivatives and the integral.
HoTT is precedent to these as well.
So, Lean isn't proven with HoTT either.
Is type theory useful in describing a Hilbert hotel infinite continuum of Reals or for describing quantum values like qubits and qudits?
I don't think I'm overselling HoTT as a useful axiomatic basis for things proven in absence of type and information-theoretic foundations.
From https://news.ycombinator.com/item?id=43191103#43201742 , which I may have already linked to in this thread :
> "Should Type Theory Replace Set Theory as the Foundation of Mathematics?" (2023) https://link.springer.com/article/10.1007/s10516-023-09676-0
That would include Real Analysis (which indeed one isn't at all obligated to prove with a prover and a language for proofs down to {Null, 0, 1} and <0.5|-0.8>).
Are types checked at runtime, with other Hoare logic preconditions and postconditions?
U.K. lab promises air conditioner revolution without polluting gases
> barocaloric solids
Barocaloric material: https://en.wikipedia.org/wiki/Barocaloric_material
Show HN: GPT image editing, but for 3D models
Hey HN!
I’m Zach one of the co-founders of Adam (https://www.adamcad.com). We're building AI-powered tools for CAD and 3D modeling [1].
We’ve recently been exploring a new way to bring GPT-style image editing directly into 3D model generation and are excited to showcase this in our web-app today. We’re calling it creative mode and are intrigued by the fun use cases this could create by making 3D generation more conversational!
For example you can put a prompt in such as “an elephant” then follow it up by “have it ride a skateboard” and it preserves the context, identity and maintains consistency with the previous model. We believe this lends itself better to an iterative design process when prototyping creative 3D assets or models for printing.
We’re offering everyone 10 free generations to start (ramping up soon!). Here’s a short video explaining how it works: https://www.loom.com/share/cf9ab91375374a4f93d6cc89619a043b
We’d also love you to try our parametric mode (free!) which uses LLMs to create a conversational interface for solid modeling as touched on in a recent HN thread [2]. We are leveraging the code generation capabilities of these models to generate OpenSCAD code (an open-source script based CAD) and are surfacing the variables as sliders the user can toggle to adjust their design. We hope this can give a glimpse into what it could be like to “vibe-CAD”. We will soon be releasing our results on Will Patrick's Text to CAD eval [3] and adding B-rep compatible export!
We’d love to hear what you think and where we should take this next :)
[1]https://x.com/zachdive/status/1882858765613228287
[2]https://news.ycombinator.com/item?id=43774990
[3]https://willpatrick.xyz/technology/2025/04/23/teaching-llms-...
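The "surface the variables as sliders" idea described above can be sketched as code generation with a parameter table (this is a hypothetical illustration, not Adam's actual implementation):

```python
def render_openscad(params):
    """Hypothetical sketch: emit OpenSCAD source with each user-facing
    parameter as a top-level variable, so a UI can surface the
    variables as sliders and re-render the model on change."""
    lines = [f"{name} = {value};" for name, value in params.items()]
    lines.append("cylinder(h=height, r=radius, $fn=64);")
    return "\n".join(lines)

source = render_openscad({"height": 20, "radius": 5})
```

Because the parameters live at the top of the generated source, the UI can re-run only the variable assignments on slider changes and leave the LLM out of the loop entirely.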
Build123d and Cadquery procedural CAD output and input support would be a cool feature.
Copilot + GPT-4 can generate build123d Python code; procedural CAD genai. It's spatially incoherent sometimes like there's not much RLHF for 3d in the coding LLM model, but the code examples are close to compiling and helpful. There's also an ocp-vscode extension to render build123d and cadquery CAD models in a tab in the vscode IDE: https://build123d.readthedocs.io/en/latest/external.html#ocp...
PartCAD is probably useful for your application as well.
Are there metadata standards for CAD parts, assemblies, and scenes? For example in PartCAD or in a BIM Building Information Model that catalogs part numbers and provenance for the boardroom hinge screws and maybe also 3D-printable or otherwise fab-able CAD models?
nething is an LLM with build123d output: https://nething.xyz/
FWIU it's possible to transform B-rep (boundary representation) to NURBS with a different tool.
B-rep: https://en.wikipedia.org/wiki/Boundary_representation
NURBS: https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline
glTF 2.0 > Adoption of: https://en.wikipedia.org/wiki/GlTF#Adoption_of_glTF_2.0
PartCAD > Features > Generative AI: https://partcad.readthedocs.io/en/latest/features.html#gener... :
> Individual parts, assemblies and scenes can also can be exported into 3D model file formats, including: STEP, BREP, STL, 3MF, ThreeJS, OBJ, GLTF, IGES
Thanks! We've done a good amount of experimenting with cadquery and build123d and actually got it to the point where it can reliably generate compiling code; the problem is the generation quality is still more limited than openscad. We may still release it so that users can export to STEP/BREP formats, but we have been going back and forth internally on how highly to prioritize it. We definitely want to build out the ability to generate multiple parts in parallel and assemblies. Thanks for all this :)
Multiple export formats ftw, watermarking, and metadata
(/?hnlog "NURBS") ... https://news.ycombinator.com/item?id=40131766 :
> ai-game-development tools lists a few CAD LLM apps like blenderGPT and blender-GPT: https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo...
Justifying OCCT CAD as an alternative; from comments on a post re: "GhostSCAD: Marrying OpenSCAD and Golang" https://news.ycombinator.com/item?id=30940563 :
> > CadQuery's CAD kernel Open CASCADE Technology (OCCT) is much more powerful than the CGAL used by OpenSCAD. Features supported natively by OCCT include NURBS, splines, surface sewing, STL repair, STEP import/export, and other complex operations, in addition to the standard CSG operations supported by CGAL
> Ability to import/export STEP and the ability to begin with a STEP model, created in a CAD package, and then add parametric features. This is possible in OpenSCAD using STL, but STL is a lossy format.
> [...] CadQuery scripts can build STL, STEP, and AMF faster than OpenSCAD.
AMF: Additive manufacturing file format: https://en.wikipedia.org/wiki/Additive_manufacturing_file_fo...
Watermarking:
/? synthID site:github.com https://www.google.com/search?q=SynthID+site%3Agithub.com
/? SynthID site:github.com inurl:awesome https://www.google.com/search?q=SynthID+site%3Agithub.com+in... :
- and-mill/Awesome-GenAI-Watermarking: https://github.com/and-mill/Awesome-GenAI-Watermarking
CAD Metadata; CAD Linked Data:
/? "CAD" Metadata standards: https://www.google.com/search?q="CAD"+metadata+standards
/? CAD 3d design formats and their metadata and RDF: https://www.google.com/search?q=CAD+3d+design+formats+and+th...
- OCX; shipbuilding BIM metadata; https://3docx.org/en/
/? "CAD" "BIM" "RDF": https://www.google.com/search?q=%22CAD%22+%22BIM%22+%22RDF%2... :
- "Scan-to-graph: Semantic enrichment of existing building geometry" (2020) https://www.sciencedirect.com/science/article/abs/pii/S09265... .. https://scholar.google.com/scholar?cites=1199367363418102338... :
> Scan-to-Graph focuses on creating an RDF-based model of an existing asset.
schema.org/CreativeWork > :Drawing, :SoftwareSourceCode
YAML-LD is JSON-LD (RDF) in YAML; with a W3C spec.
PartCAD is plain YAML; but there could probably be an @context to map each YAML metadata attribute to a URI so that Linked Data can be parsed and indexed (for example by search engines which index some schema.org RDFS Classes and Properties).
PartCAD index / electronics / sbcs / partcad.yml: https://github.com/partcad/partcad-index/blob/main/electroni...
partcad.yml: partcad/partcad-electronics-sbcs-intel/blob/main/partcad.yaml: https://github.com/partcad/partcad-electronics-sbcs-intel/bl...
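For illustration, a hypothetical @context that could map PartCAD-style YAML keys to schema.org terms (the keys and IRIs below are invented for this sketch, not part of PartCAD):

```yaml
# Mapping each metadata attribute to a schema.org term makes the plain
# YAML parseable as YAML-LD / JSON-LD without changing the data keys.
"@context":
  schema: "https://schema.org/"
  name: "schema:name"
  desc: "schema:description"
  url: "schema:url"
name: "Raspberry Pi 4"
desc: "Single-board computer; CAD model and datasheet links"
url: "https://github.com/partcad/partcad-index"
```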
In construction, the building inspector requires the spec sheet for basically every part of a house.
Linked Data is presumably advantageous for traceability, given linked provenance metadata from CAD design (w or w/o AI), part manufacturing w/ batch number, assembly, and supply chain lineage.
Properties for a https://schema.org/CreativeWork > :CADModel RDFS Class to increase reusability?:
- Property: isParametric: bool
- Property: parametricSize
- Property: parameters: [...]
- Property: ~ sourceCodeOf ; source URL w/ (git) revhash or version tag
- Property: ~ signedOffOnBy ( to W3C Verifiable Credentials)
- Property: ~ promptsAndModels
SBOM Software Bill of Materials tools can work with and generate JSON-LD RDF for the software found installed on containers, VMs, and hosts. The same for CAD models would be advantageous.
So, identify a portable software packaging format manifest that already supports cryptographic signatures (for https://SLSA.dev e.g. with https://sigstore.dev/ ) with additional (schema.org RDFS in RDFa HTML,) Linked Data properties, to skip having to grep for whether the described and indexed CAD model is parametric with respect to size and where its origin is; for example:
- Property: originPoint and signed axes layout
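Putting those proposed properties together, a hypothetical JSON-LD instance might look like this (isParametric, parameters, etc. are the proposed extension properties, not schema.org terms today; the names and URLs are invented):

```python
import json

# Hypothetical CAD model metadata as JSON-LD, typed with the existing
# schema.org CreativeWork class plus the proposed extension properties.
cad_model = {
    "@context": "https://schema.org/",
    "@type": "CreativeWork",
    "name": "bracket-v2",
    "isParametric": True,
    "parameters": [
        {"name": "hole_diameter", "unitCode": "MMT", "value": 5},
    ],
    # Provenance: source repo pinned to a revision hash.
    "isBasedOn": "https://example.org/parts.git#abc1234",
}
doc = json.dumps(cad_model, indent=2)
```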
Show HN: Air Lab – A portable and open air quality measuring device
Hi HN!
I’ve been working on an air quality measuring device called Air Lab for the past three years. It measures CO2, temperature, relative humidity, air pollutants (VOC, NOx), and atmospheric pressure. You can log and analyze the data directly on the device — no smartphone or laptop needed.
To better show what the device can do and what it feels like, I spent the past week developing a web-based simulator using Emscripten. It runs the stock firmware with most features available, except for networking. Check it out and let me know what you think!
The firmware will be open-source and available once the first batch of devices ships. We’re currently finishing up our crowdfunding campaign on CrowdSupply. If you want to get one, now is the time to support the project: https://www.crowdsupply.com/networked-artifacts/air-lab
We started building the Air Lab because most air quality measuring devices we found were locked-down or hard to tinker with. Air quality is a growing concern, and we’re hoping a more open, playful approach can help make the topic more accessible. It is important to us that there is a low bar for customizing and extending the Air Lab. Until we ship, we plan to create rich documentation and further tools, like the simulator, to make this as easy as possible.
The technical: The device is powered by the popular ESP32S3 microcontroller, equipped with a precise CO2, temperature, and relative humidity sensor (SCD41) as well as a VOC/NOx (SGP41) and atmospheric pressure sensor (LPS22). The support circuitry provides built-in battery charging, a real-time clock, an RGB LED, buzzer, an accelerometer, and capacitive touch, which makes Air Lab a powerful stand-alone device. The firmware itself is written on top of esp-idf and uses LVGL for rendering the UI.
If you seek more high-level info, here are also some videos covering the project: - https://www.youtube.com/watch?v=oBltdMLjUyg (Introduction) - https://www.youtube.com/watch?v=_tzjVYPm_MU (Product Update)
Would love your feedback — on the device, hardware choices, potential use cases, or anything else worth improving. If you want to get notified on project updates, subscribe on Crowd Supply.
Happy to answer any questions!
I wish it had support for Zigbee so I could pair it with other open data aggregation systems like Home Assistant. AirGradient, another cool air quality monitor, for example, does not have this either.
Matter protocol support would also be a useful feature.
Potential integration: Run HVAC fans and/or an attic fan and/or a crawlspace fan if indoor AQI is worse than outdoor AQI
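That integration could be as simple as a comparison with a margin to avoid rapid toggling; a hypothetical sketch (the function name, margin, and example AQI values are all illustrative, and real readings would come from the device plus an outdoor feed):

```python
# Hypothetical control sketch for the fan integration idea above.
def should_ventilate(indoor_aqi, outdoor_aqi, margin=10):
    """Run HVAC/attic/crawlspace fans only if outdoor AQI is lower
    (cleaner) than indoor AQI by at least `margin`, to avoid toggling."""
    return outdoor_aqi + margin < indoor_aqi

print(should_ventilate(indoor_aqi=80, outdoor_aqi=40))  # True
print(should_ventilate(indoor_aqi=40, outdoor_aqi=80))  # False
```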
This says that Air Quality Sensor support was added to matter protocol in 2023: https://csa-iot.org/newsroom/matter-1-2-arrives-with-nine-ne... :
> Air Quality Sensors – Supported sensors can capture and report on: PM1, PM2.5, PM10, CO2, NO2, VOC, CO, Ozone, Radon, and Formaldehyde. Furthermore, the addition of the Air Quality Cluster enables Matter devices to provide AQI information based on the device’s location
/? matter protocol Air Quality Cluster: https://www.google.com/search?q=matter+protocol+Air+Quality+...
We'll definitely look into supporting Matter in the future, as it would allow integration with the most common home automation platforms/apps out there.
FWIU Thread and Matter work better when there is a "Border Router" ('hub') in the system; https://news.ycombinator.com/item?id=32167256#32186688
United States Digital Service Origins
Hopefully these services can be restored in 3.5 years. One of the greatest things to happen to aging infrastructure of this country in the last 10 years as a lot of our systems are based on aging military systems.
We just need it, there’s no question of the benefits and there were no negatives to speak about.
Thank you President Barack Obama! A true leader and patriot.
I mean, I liked what they made but its kinda sad that greatest thing to happen to our nation's infrastructure was some nice websites.
Meanwhile, healthcare housing and education got way more expensive and taxes for the wealthy went down.
And a lot of bridges fell into disrepair, roads get worse, etc. Gridlock has made funding anything pretty hard in the last decade, and certain parties are so anti spending they won't try to fix it
Anti spending? The deficit has increased under every republican president almost back to ww2. Their two Santa strategy seems to work well confusing people though.
/? https://www.google.com/search?q=The+deficit+has+increased+un... :
- History of the United States public debt: https://en.wikipedia.org/wiki/History_of_the_United_States_p...
- https://www.politifact.com/factchecks/2019/jul/29/tweets/rep... (2019) :
> The deficit is the difference between the money that the government makes and the money it spends. If the government spends more than it collects in revenues, then it’s running a deficit.
> The federal debt is the running total of the accumulated deficits. [Or surpluses]
"Federal Surplus or Deficit [-] (FYFSD)" https://fred.stlouisfed.org/series/FYFSD#
"Federal Surplus or Deficit [-] as Percent of Gross Domestic Product (FYFSGDA188S)" https://fred.stlouisfed.org/series/FYFSGDA188S
Dr. Sbaitso
Dr. Sbaitso: https://en.wikipedia.org/wiki/Dr._Sbaitso
...
Clean Language questions might be good.
Clean Language: https://en.wikipedia.org/wiki/Clean_language
From https://news.ycombinator.com/item?id=40085678 :
> Clean Language questions : https://cleanlearning.co.uk/blog/discuss/clean-language-ques...
Bootstrapping HTTP/1.1, HTTP/2, and HTTP/3
From the article; the HTTP response header (Alt-Svc) that results in upgrade to HTTP/3:
alt-svc: h3=":443", h2=":443"
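For illustration, a value like this can be split into (protocol, authority) pairs; `parse_alt_svc` below is a hypothetical helper, not a function from any real HTTP library:

```python
def parse_alt_svc(value):
    """Split an Alt-Svc header value into (protocol, authority) pairs,
    ignoring parameters such as ma= (max age). Hypothetical helper."""
    if value.strip() == "clear":
        return []
    services = []
    for entry in value.split(","):
        proto, _, rest = entry.strip().partition("=")
        authority = rest.split(";")[0].strip().strip('"')
        services.append((proto.strip(), authority))
    return services

print(parse_alt_svc('h3=":443", h2=":443"'))  # [('h3', ':443'), ('h2', ':443')]
```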
> The HTTP Alt-Svc header it received in packet 25 included the directive h3=":443", so when we then reloaded the page (note: not shift-reload, which would have caused Chrome to "forget" the Alt-Svc for this site), Chrome could switch over to QUIC (packets 31 onwards) and then make the request using HTTP/3 (packets 44-45).
Show HN: GribStream – Query Weather Forecasts Like a Database
Hey HN,
I shared GribStream here a while ago when it was just a NOAA forecast API. It’s grown into something bigger: a full weather data archive you can query just like a database, by time (with as-of/time-travel!), location, or more interestingly by specific weather conditions.
Use built-in expressions directly in your API requests to calculate derived metrics like wind chill, dew point, or more complex ones like crop-specific photothermal unit capture (see demos below). Computation is fully server-side, ideal for web apps, dashboards, or mobile apps with limited resources.
Demos ( https://gribstream.com/demo ):
NBM Snow Accumulation: Real-time ski conditions across North America
HRRR Corn Growth Simulator: Predict crop development stages precisely
GFS Storm Chaser: Track storms globally using dynamic filtering
NBM Wind Field Explorer: Explore wind patterns interactively
Other Improvements:
Added models: NBM, GFS, HRRR, RAP, GEFS, CFS, and Google's GraphCast GFS
Bounding box queries at custom resolution. Perfect for interactive maps
Significant price reduction: over 50% cheaper, plus 90% discount for cached requests
Efficient bulk queries (up to 500 points per request) at same quota cost
Comprehensive Quickstart guide and OpenAPI spec for easier onboarding
Sneak peek at what I'm currently doing: Realtime streaming of grib files as video. This is raw, not productionized at all, just an alpha-toy but it is cool so I'd like to share a few examples. This does no caching at all, it is generated on the fly. So you can play with the times, try other weather variables, tweak the scaling and framerate. I'll be monitoring networking in case I need to shut it down in a hurry, please be patient. Demo for Hurricane Milton
Wind speed at 80m
https://gribstream.com/video?fromTime=2024-10-08T00:00:00Z&UntilTime=2024-10-12T00:00:00Z&name=WIND&level=80%20m%20above%20ground&info=&scaleMin=0&scaleMax=40&fps=6
Convective available potential energy at the surface
https://gribstream.com/video?fromTime=2024-10-08T00:00:00Z&UntilTime=2024-10-12T00:00:00Z&name=CAPE&level=surface&info=&scaleMin=0&scaleMax=3000&fps=6
# wave height milton
https://gribstream.com/video?fromTime=2024-10-08T00:00:00Z&UntilTime=2024-10-12T00:00:00Z&name=HTSGW&level=surface&info=&scaleMin=0&scaleMax=7&fps=6
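URLs like these can be assembled programmatically; a minimal sketch using only the parameter names visible in the example URLs above (everything else about the endpoint's contract is an assumption):

```python
from urllib.parse import urlencode

# Parameter names copied from the example /video URLs above; the rest of
# the API contract is assumed, so treat this as a sketch only.
params = {
    "fromTime": "2024-10-08T00:00:00Z",
    "UntilTime": "2024-10-12T00:00:00Z",
    "name": "WIND",                 # weather variable
    "level": "80 m above ground",
    "scaleMin": 0,
    "scaleMax": 40,
    "fps": 6,
}
url = "https://gribstream.com/video?" + urlencode(params)
print(url)
```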
Coming Soon:
Aggregations over time/space
Lookups by city, zipcode, airport, or custom shapes
New response formats (Parquet, PNG, MP4)
Threshold-based notifications (webhooks/emails)
Full GEFS/CFS ensemble data for probabilistic forecasts

Side-project hunting? If you're looking for your next indie-hack, here are a few ideas that GribStream makes ridiculously easy:
Agriculture: Crop growth modeling and irrigation planning
Renewable Energy: Forecasting for solar and wind energy production
Logistics: Weather-informed routing and delivery scheduling
Insurance: Risk modeling based on historical weather patterns
Event Planning: Scheduling and resource allocation for outdoor events
Hopefully DOGE will let NOAA keep running smoothly so I can ship faster!
Would appreciate your feedback, feature requests, or any ideas on what you'd love to see next.
Thank you!
Was going to mention GenCast and then realized you already have the models live.
Herbie support for the GribStream API might be worthwhile; https://news.ycombinator.com/item?id=42470493 :
> Are there error and cost benchmarks for these predictive models?
ENH: TODO list / PIM integration potential: task notifications for specifically labeled tasks for when it's going to rain or going to be dry for days; when to plant, mow, or work on construction projects in various states.
todo.txt labels: +goingtorain +goingtobedry
FarmOS is an open source agtech product that could have Historical and Predictive weather built-in.
"Show HN: We open-sourced our compost monitoring tech" https://news.ycombinator.com/item?id=42201207 ; compost and mulch businesses can save money by working with the rain
https://news.ycombinator.com/item?id=42201251 ; "crop monitoring system" site:github.com , digital agriculture, precision agriculture, SIEM
"Satellite images of plants' fluorescence can predict crop yields" https://news.ycombinator.com/item?id=40234890
Having been inundated with ads on weather apps that must afford notifications somehow, I've tried a few weather apps; TWC, AccuWeather; Wx, WeatherMaster, WeatherRadar
Wx is too complex for the average bear. (Where is animated local radar, for example?)
WeatherMaster is nice and open source, but isn't on FDroid and doesn't have animated radar
WeatherRadar is open source, on FDroid, has a chart that shows {Precipitation and Hi and Lo} all on the one chart, and already requires users to get and configure a uniquely-identifying third-party API token.
Radio Astronomy Software Defined Radio (Rasdr)
Are there Rydberg antenna SDRs for radio astronomy?
What sensitivity (?) is necessary to navigate by the EMF of stars?
FWIU this is called astrometry (or celestial navigation)? There's probably a better word than "astral"?:
From https://news.ycombinator.com/item?id=44054783 :
> How many astral signals does a receiver need to fix to determine lat/long/altitude given the current time?
> How many astral signals does a receiver need to fix to infer the current time, given geometrically-impossible triangulation and trilateration solutions given the known geometry of the cosmos and the spherical shape of the earth?
Microsandbox: Virtual Machines that feel and perform like containers
Thanks for sharing!
I'm the creator of microsandbox. If there is anything you need to know about the project, let me know.
This project is meant to make creating microvms from your machine as easy as using Docker containers.
Ask me anything.
Only did a quick skim of the readme, but a few questions which I would like some elaboration.
How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?
Can I run a GUI inside of it?
Do you think of this as a new Vagrant?
How do I get data in/out?
> How is it so fast? Is it making any trade offs vs a traditional VM? Is there potential the VM isolation is compromised?
It is a lightweight VM and uses the same technology as Firecracker.
> Can I run a GUI inside of it?
It is planned but not yet implemented. But it is absolutely possible.
> Do you think of this as a new Vagrant?
I would consider Docker for VMs instead. In a similar way, it focuses on dev-ops-type use cases like deploying apps, etc.
> How do I get data in/out?
There are an SDK and a server that help do that, and file streaming is planned. But right now, you can execute commands in the VM and get the result back via the server.
> I would consider Docker for VMs instead.
Native Containers would probably work here, too.
From https://news.ycombinator.com/item?id=43553198 :
>>> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/
And also from that thread:
> How should a microkernel run (WASI) WASM runtimes?
What is the most minimal microvm for WASM / WASI, and what are the advantages to running WASM workloads with firecracker or microsandbox?
> What is the most minimal microvm for WASM / WASI,
By setting up an image with wasmtime for example.
> and what are the advantages to running WASM workloads with firecracker or microsandbox?
I can think of stronger isolation or when you have legacy stuff you need to run alongside.
From https://e2b.dev/blog/firecracker-vs-qemu
> AWS built [Firecracker (which is built on KVM)] to power Lambda and Fargate [2], where they need to quickly spin up isolated environments for running customer code. Companies like E2B use Firecracker to run AI generated code securely in the cloud, while Fly.io uses it to run lightweight container-like VMs at the edge [4, 5].
"We replaced Firecracker with QEMU" (2023) https://news.ycombinator.com/item?id=36666782
"Firecracker's Kernel Support Policy" describes compatible kernel configurations; https://github.com/firecracker-microvm/firecracker/blob/main...
/? wasi microvm kernel [github] https://www.google.com/search?q=wasi+microvm+kernel+GitHub :
- "Mewz: Lightweight Execution Environment for WebAssembly with High Isolation and Portability using Unikernels" (2024) https://arxiv.org/abs/2411.01129 similar: https://scholar.google.com/scholar?q=related:b3657VNcyJ0J:sc...
MCP Jupyter: AI-powered Jupyter collaboration
Re: the _ai_repr_() Jupyter JEP and also MCP, and _repr_jsonld_: https://github.com/jupyter/enhancement-proposals/pull/129#is...
Efficient superconducting diodes and rectifiers for quantum circuitry
ScholarlyArticle: "Efficient superconducting diodes and rectifiers for quantum circuitry" (2025) https://www.nature.com/articles/s41928-025-01375-5
NewsArticle: "Superconducting diode bridge efficiently converts AC to DC for quantum circuits" https://phys.org/news/2025-05-superconducting-diode-bridge-e... :
> Their superconducting diode bridge, introduced in a paper published in Nature Electronics, was found to perform remarkably well at cryogenic temperatures, achieving rectification efficiencies as high as 42% ± 5%.
Yeah, what I see is

> The bridge can function as a full-wave rectifier with an efficiency up to 42 ± 5%, and offers alternating current (a.c.) to direct current (d.c.) signal conversion capabilities at frequencies up to 40 kHz

Ordinary bridge rectifiers are almost twice as efficient as that; it's an unusual concern how high a frequency they can work at. Past 10 kHz the switching time of diodes is a problem, but I hear you can get diodes that switch around 1 MHz if you had to.

https://electronics.stackexchange.com/questions/152225/what-...
DC current is 0 Hz.
Isn't 110V AC typically 0.06 kHz (60 Hz) in the US?
0.06 kHz < 40 kHz
Yeah, people usually use bridge rectifiers with 50 or 60 Hz or maybe 400 Hz in aviation applications.
However, the way people build power supplies has changed completely since the 1970s when I was a kid reading ham radio books that told you to get a big transformer, hook it to a bridge rectifier, and have a filter with some large capacitors in it. Today the state of the art is
https://en.wikipedia.org/wiki/Switched-mode_power_supply
https://en.wikipedia.org/wiki/Buck%E2%80%93boost_converter
My son was looking for an 18V-1A power supply for a guitar gadget and looking around the house he told me "there doesn't seem to be any connection with the voltage and current and the size of the supply" and it's precisely because of that revolution. Even if the input power is 60 Hz the power supply switches at 100 kHz or more which means the filter network is vastly smaller and you don't have the really dangerous big capacitors you might have seen in the power supply of a PDP-11 or something like that.
Are there smaller AC to USB-C PD power adapters than GaN?
GaN > Transistors and power ICs: https://en.wikipedia.org/wiki/Gallium_nitride#Transistors_an...
Rectifiers > Rectifier circuits, Rectification technologies: https://en.wikipedia.org/wiki/Rectifier#Rectification_techno...
https://news.ycombinator.com/item?id=40693984#40720147 :
> Is there a rectifier for gravitational waves, and what would that do?
Basically just abs() absolute value eh
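For a conventional full-wave rectifier, the ideal transfer function is indeed abs(); a quick numerical sketch of the resulting DC average for a unit-amplitude 60 Hz sine (ideal diodes assumed, no forward drop or switching time):

```python
import math

# Ideal full-wave rectification of a unit-amplitude 60 Hz sine, sampled
# over exactly one period.
f = 60.0
n = 100_000
samples = [math.sin(2 * math.pi * f * (k / (n * f))) for k in range(n)]
rectified = [abs(s) for s in samples]        # the bridge as abs()
dc_average = sum(rectified) / n
print(round(dc_average, 4))  # ideal full-wave average is 2/pi ≈ 0.6366
```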
IDK what the SOTA efficiency of these is: "Nanoscale spin rectifiers for harvesting ambient radiofrequency energy" https://news.ycombinator.com/item?id=43234022
BespokeSynth is an open source "software modular synth" DAW that can host LV2 and VST3 plugins like Guitarix, which can also add signal transforms like guitar effects pedals. Tried searching for an apparently exotic 1A universal power supply. Apparently also exotic: A board of guitar pedals with IDK one USB-A and a USB-C adapter with OSC and MIDI support; USB MIDI trololo pedals
OpenTPU: Open-Source Reimplementation of Google Tensor Processing Unit (TPU)
Can [OpenTPU] TPUs be fabricated out of graphene, with nanoimprinting or a more efficient approach?
From https://news.ycombinator.com/item?id=42314333 :
>> From "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
>>> Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
What about QPUs though?
Can QPUs (Quantum Processing Units) built with electrons in superconducting graphene ever be faster than photons in integrated nanophotonics?
There are integrated parametric single-photon emitters and detectors.
Is there a lower cost integrated nanophotonic coherent light source for [quantum] computing than a thin metal wire?
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885
Singularities in Space-Time Prove Hard to Kill
My non-canonical and possibly wrong view of black hole singularities is that they can form in finite time (sort of) only in their own frame of reference, but in any other frame they require infinite time, so from our point of view no singularity ever formed and none ever will. And in practice, in this infinite time something might disrupt their collapse, like getting hit with a similarly massive amount of anti-matter and turning into photons, which might destabilize the whole process and let them get out in a massively energetic event similar to the Big Bang. Which I believe was the case with ours, so no white-hole singularity either.
This view is dismissed by physicists because in GR there's no way to unambiguously define simultaneity, so they don't even attempt to consider what's "before" and what's "after" regarding remote events in strong gravitational fields; saying that singularity formation is "after" everything else that ever happens in the universe is a hard sell.
We don't know if singularities are even possible. Maybe the universe has some crazy repulsive force when atoms or subatomic particles get really really close (closer than in neutron stars, where atoms are femtometers apart).
You can get this easy by reformulating gravity's effect on spacetime as slowing down the speed of light/causality and putting a natural bound that asymptotically approaches zero. It should agree with GR everywhere except at extremes like black holes.
Looking at gravity as a slowdown of c is appealing because it suggests a computational cost of massive particles. As stuff gets more dense, the clock of the universe must slow down.
GR does not describe the interior topology of black holes, beyond predicting a singularity. Is there a hard boundary with no hair, or is there a [knotted or braided] fluidic attractor system with fluidic turbulence at the boundary?
SQR Superfluid Quantum Relativity seems to suggest that there is no hard event horizon boundary.
I don't understand how any model that lacks descriptions of phase states in BEC superfluids could sufficiently describe the magneto-hydro-thermo-gravito dynamics of a black hole system and things outside of it?
It is unclear whether mass/energy/information is actually drawn into a supermassive or a microscopic black hole; couldn't it be that things are only ever captured into attractor paths that are outside of the event horizon?
Does Hawking radiation disprove that black holes don't absorb mass/energy/information?
Signatures of chiral superconductivity in rhombohedral graphene
"Signatures of chiral superconductivity in rhombohedral graphene" (2025) https://www.nature.com/articles/s41586-025-09169-7 :
> Chiral superconductors are unconventional superconducting states that break time reversal symmetry spontaneously and typically feature Cooper pairing at non-zero angular momentum. Such states may host Majorana fermions and provide an important platform for topological physics research and fault-tolerant quantum computing [1–7]. Despite intensive search and prolonged studies of several candidate systems [8–26], chiral superconductivity has remained elusive so far. Here we report the discovery of robust unconventional superconductivity in rhombohedral tetra- and penta-layer graphene without moiré superlattice effects. [...] We also observed a critical B⊥ of 1.4 Tesla, higher than any graphene superconductivity and indicates a strong-coupling superconductivity close to the BCS-BEC crossover [27]. Our observations establish a pure carbon material for the study of topological superconductivity, with the promise to explore Majorana modes and topological quantum computing.
Are "strange metals" really necessary for the study of topological superconductivity, if comparable effects are now multiply-demonstrated with various forms of graphene?
"'Strange metals' point to a whole new way to understand electricity" (2025) https://news.ycombinator.com/item?id=44087916
'Strange metals' point to a whole new way to understand electricity
So electrons are just like photons, being a wave/particle? The article seems to suggest that in strange metals their particle properties are absent and only 'electron field' gradients move, as if electrons exchanged their 'charge'.
Electrons are not just like photons. It's tempting to say that, but there are some significant differences that can lead you in error if you think in this picture.
First of all, if you think of a photon as some small ball, no, that's not what it is. Mathematically a photon is defined as a state of the EM field (which has been quantised into a set of harmonic oscillators called "normal modes") in which there is exactly one quantum of excitation of a specific normal mode (with given wavevector and frequency). Depending on which kind of modes you consider, a photon could be a Gaussian beam, or even a plane wave, so not something localised like you would say of a particle.
Unlike photons, electrons have a position operator, so in principle you can measure and say where one electron is. The same is impossible for photons. Also, electrons have a mass, but photons are massless. This means you can have motionless electrons, but this is impossible for photons: they always move at the speed of light. Electrons have a non-relativistic classical limit, while photons do not.
W. E. Lamb used to say that people should be required a license for the use of the word "photon", because it can be very misleading.
Why don't photons have a position operator?
It’s really not accurate to say that a photon has no position at all. How would a photodiode work? You have to be careful with this stuff. https://physics.stackexchange.com/questions/492711/whats-the...
Photons certainly appear to have a real physical location with 1e12 FPS imaging capabilities:
"Visualizing video at the speed of light — one trillion frames per second" (2012) https://youtube.com/watch?v=EtsXgODHMWk&
But is there an identity function for a photon(s), and is "time-polarization" necessary for defining an identity function for photons?
Peer Programming with LLMs, for Senior+ Engineers
What are some of the differences between Peer Programming with LLMs and Vibe Coding?
This is the origin of vibe coding: https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. (...) I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. (...)
Pair programming still very much deals with code and decisions.
So, pair programming continues to emphasize software quality (especially with LLMs) but "vibe coding" is more of a "whoo, I'm a reckless magician" (in a less risky application domain) sort of thing?
But doesn't a 'vibe-coding' "we'll just sort out the engineering challenges later" ensure that there will be re-work and thus less overall efficiency?
Hacker News now runs on top of Common Lisp
So, Hacker News was not rewritten in Common Lisp. Instead they reimplemented the Arc Runtime in Common Lisp.
And that's the sort of thing Lisp excels in
Jupiter was formerly twice its current size, had a much stronger magnetic field
How is the size defined for a gas planet? The gas density just keeps dropping, so where do you draw the line (isosurface, rather)? Earth's radius is conventionally defined without Earth's atmosphere.
The density falls off pretty steeply at the “edge”, so the exact definition only makes little difference for the radius: https://www.researchgate.net/figure/Density-vs-radius-for-a-...
This is because of Newtonian gravity being inversely proportional to the square of the radius, right?
Gravity changes little over that distance - it's more because of the compounding effect of atmospheric pressure (the deeper you go, the more air you have above you which raises the pressure, raising the density and meaning that pressure increases exponentially faster).
What makes that curve exponential?
Newtonian gravity (classical mechanics).
Two-body gravitational attraction is observed to be an inverse square power law; gravitational attraction decreases with the square of the distance.
g, the standard acceleration due to gravity at Earth's surface, is approximately 9.8 m/s^2.
Atmospheric pressure: https://en.wikipedia.org/wiki/Atmospheric_pressure#:~:text=P... :
> Pressure (P), mass (m), and acceleration due to gravity (g) are related by P = F/A = (m*g)/A, where A is the surface area. Atmospheric pressure is thus proportional to the weight per unit area of the atmospheric mass above that location.
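The exponential falloff the thread is describing is the barometric formula; a rough numeric sketch with illustrative Earth values (an isothermal atmosphere and constant g are assumed, so this is only an approximation):

```python
import math

# Isothermal barometric formula: P(h) = P0 * exp(-h / H), with scale
# height H = R*T/(M*g). All values below are illustrative Earth numbers.
R = 8.314      # J/(mol*K), universal gas constant
T = 288.0      # K, rough mean surface temperature
M = 0.02896    # kg/mol, molar mass of dry air
g = 9.81       # m/s^2, surface gravity
P0 = 101325.0  # Pa, sea-level pressure

H = R * T / (M * g)  # scale height, roughly 8.4 km for Earth

def pressure(h_m):
    """Pressure in Pa at altitude h_m metres, isothermal approximation."""
    return P0 * math.exp(-h_m / H)

print(round(H))               # scale height in metres
print(round(pressure(8848)))  # near Everest's summit, roughly a third of P0
```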
Accelerating Docker Builds by Halving EC2 Boot Time
RUN --mount=type=cache can also significantly reduce build times if there is inter-build locality; i.e. if container build jobs run on the same nodes so that cache mounts can be reused by subsequent build jobs.
Examples of docker image cache mounts:
# yum + dnf (the install commands are illustrative)
RUN --mount=type=cache,id=yum-cache,target=/var/cache/yum,sharing=shared \
    --mount=type=cache,id=dnf-cache,target=/var/cache/dnf,sharing=shared \
    dnf install -y gcc

# apt
RUN --mount=type=cache,id=aptcache,target=/var/cache/apt,sharing=shared \
    apt-get update && apt-get install -y build-essential

# pip
RUN --mount=type=cache,id=pip-cache,target=${apphome}/.cache/pip,sharing=shared \
    pip install -r requirements.txt

# cargo w/ uid=1000
RUN --mount=type=cache,id=cargocache,target=${apphome}/.cargo/registry,uid=1000,sharing=shared \
    cargo build --release
"Optimize cache usage in builds" https://docs.docker.com/build/cache/optimize/
Starlink being evaluated to enhance GPS PNT capabilities
Aren't there stable astral radio signal sources detectable with next-gen quantum sensors, such as Rydberg antenna arrays?
How many astral signals does a receiver need to fix to determine the time of day on earth (~theta) given lat/long?
How many astral signals does a receiver need to fix to determine lat/long/altitude given the current time?
How many astral signals does a receiver need to fix to infer the current time, given geometrically-impossible triangulation and trilateration solutions given the known geometry of the cosmos and the spherical shape of the earth?
There is a regular monotonic tick in quantum waves; but not a broadcast planetary time offset/reference?
"Simple Precision Time Protocol at Meta" https://news.ycombinator.com/item?id=39306209 :
> FWIW, from "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506 :
>> TIL there's a regular heartbeat in the quantum foam; there's a regular monotonic heartbeat in the quantum Rydberg wave packet [photoionization] interference; and that should be useful for distributed applications with and without vector clocks and an initial time synchronization service
"Quantum watch and its intrinsic proof of accuracy" (2022) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
..
Re: ntpd-rs and higher-resolution network time protocols {WhiteRabbit (CERN), SPTP (Meta)} and NTP NTS : https://news.ycombinator.com/item?id=40785484 :
> "RFC 8915: Network Time Security for the Network Time Protocol" (2020)
Cavity quantum electrodynamics with moiré photonic crystal nanocavity
Would such an integrated nanophotonic light source substitute for quantum dots, for cavity QED QC and sensors?
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33490730 :
"Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2022) https://doi.org/10.21203/rs.3.rs-1572967/v1
Show HN: Olelo Foil - NACA Airfoil Sim
Hi HN!
A while back, I started exploring ways to make aerodynamic simulation more interactive and visual for the web. I wanted something that felt immediate—intuitive enough for students, fast enough for hobbyists, and hackable enough for engineers. That’s how Olelo Foil was born.
Foil is a browser-based airfoil simulator written in JavaScript using Three.js and WebGL. It lets you interactively explore how airfoils behave under different conditions, all rendered in real time. Right now, it uses simplified fluid models, but I’m working toward integrating Navier-Stokes for more accurate simulations—and I’d love help from anyone interested in fluid dynamics, GPU compute, or numerical solvers.
I’m also building Olelo Honua, an educational platform focused on Hawaiian STEM content and digital tools. Foil is one piece of that larger vision—bringing STEM education into the browser with open, accessible tools.
Check it out, and if you're interested in collaborating (especially on the physics side), I’d love to connect!
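As a point of reference for the geometry side (the standard published NACA 4-digit formula, not Olelo Foil's code): the half-thickness distribution of a symmetric NACA airfoil can be sketched in a few lines of Python:

```python
import math

def naca4_half_thickness(t, n=50):
    """Return (x, y_t) pairs on the unit chord for a symmetric NACA 00xx
    airfoil, where t is the thickness ratio (e.g. 0.12 for NACA 0012)."""
    pts = []
    for i in range(n + 1):
        x = i / n
        yt = 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                      - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)
        pts.append((x, yt))
    return pts

coords = naca4_half_thickness(0.12)
print(max(y for _, y in coords))  # max half-thickness ≈ 0.06, near x ≈ 0.3
```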
Notes re: CFD and Navier-Stokes:
"Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations" https://news.ycombinator.com/item?id=31049608
https://github.com/chennachaos/awesome-FEM4CFD?tab=readme-ov...
>> Numerical methods in fluid mechanics: https://en.wikipedia.org/wiki/Numerical_methods_in_fluid_mec...
jax-cfd mentions phiflow
jax-cfd > Other awesome projects: https://github.com/google/jax-cfd#other-awesome-projects
PhiFlow: https://github.com/tum-pbs/PhiFlow/
We had a monochrome green aerodynamic simulation app literally on floppy disks in middle school in approximately 1999 that was still cool then. IIRC various keyboard keys adjusted various parameters of the 2d hull that was tested to eventually - after floppy disc noises - yield a drag coefficient.
TIL that the teardrop shape maximizes volume and minimizes drag coefficient, but some spoiler wings do generate downward lift to maintain traction at speed.
A competitive game with scores and a leaderboard might be effective.
...
Navier-Stokes for compressible and incompressible fluids, but it's a field of vortices with curl so SQG/SQR Superfluid Quantum Gravity / Relativity has Gross-Pitaevskii for modeling emergent dynamics like fluidic attractor systems in exotic states like superfluids and superconductors and supervacuum.
TIL the Mpemba effect says that the phase diagram for water is incomplete, because one needs the initial water temperature to predict the time to freeze or boil; those have to be manifold charts like HVAC.
There's a Gross-Pitaevskii model of the solar system; gravity results in n-body fluidic vortices which result in and from the motions of the planets and other local masses.
/?hnlog "CFD" :
From "FFT-based ocean-wave rendering, implemented in Godot" https://news.ycombinator.com/item?id=41683990 :
> Can this model a fluid vortex between 2-liter bottles with a 3d-printable plastic connector?
> Curl, nonlinearity, Bernoulli, Navier-Stokes, and Gross-Pitaevskii are known tools for CFD computational fluid dynamics with Compressible and Incompressible fluids.
> "Ocean waves grow way beyond known limits" (2024-09) https://news.ycombinator.com/item?id=41631177#41631975
Also, recently I learned that longitudinal waves in superfluids (and plasmas) are typically faster than transverse standing waves that we observe in fluid at Earth pressures.
FreeBASIC is a free/open source BASIC compiler for Windows, DOS and Linux
This one emulates GW-BASIC as PC-BASIC so old BASIC programs for the IBM PC DOS systems can run on modern systems: https://robhagemans.github.io/pcbasic/
FreeBASIC is like Microsoft's QuickBASIC.
More BASIC Languages: https://www.thefreecountry.com/compilers/basic.shtml
It really isn't - from the docs themselves:
FreeBASIC gives you the FreeBASIC compiler program (fbc or fbc.exe),
plus the tools and libraries used by it. fbc is a command line program
that takes FreeBASIC source code files (*.bas) and compiles them into
executables. In the combined standalone packages for windows, the main
executable is named fbc32.exe (for 32-bit) and fbc64.exe (for 64-bit)
The magic of QuickBasic was that it was an editor, interpreter, and help system all rolled up into a single EXE file. Punch F5 and watch your BAS file execute line-by-line.
> The magic of QuickBasic was that it was an editor, interpreter, and help system all rolled up into a single EXE file. Punch F5 and watch your BAS file execute line-by-line.
That's still how vscode works; F5 to debug and Ctrl-[Shift]-P like CtrlP.vim: https://code.visualstudio.com/docs/debugtest/debugging
FWICS,
The sorucoder.freebasic vscode extension has syntax highlighting: https://marketplace.visualstudio.com/items?itemName=sorucode...
There's also a QB64Official/vscode extension that has syntax highlighting and keyboard shortcuts: https://github.com/QB64Official/vscode
re: how qb64 and C-edit are like EDIT.COM, and GORILLA.BAS: https://news.ycombinator.com/item?id=41410427
C-edit: https://github.com/velorek1/C-edit
'Edit' - a CLI/TUI text editor similar to EDIT.COM but written in rust - is now open source https://news.ycombinator.com/item?id=44031529
Laser-Induced Graphene from Commercial Inks and Dyes
What is the advantage of making graphene from inks and dyes instead of from flash-heated unsorted recycled plastic or lasered fruit peels?
/? Graphene laser fruit peel: https://www.google.com/search?q=graphene%20laser%20fruit%20p...
This is covered in the Introduction.
They apply the ink/dye to a surface and then use a laser on it, leaving the graphene behind.
> A versatile “Paint & Scribe” methodology is introduced, enabling to integrate LIG tracks onto any wettable surface, and in particular onto printed and flexible electronics. A process for obtaining freestanding and transferrable LIG is demonstrated by dissolving acrylic paint in acetone and floating LIG in water. This advancement offers novel avenues for diverse applications that necessitate a transfer process of LIG.
I still don't understand why that's only possible with commercial inks and dyes and not also with aromatic fruit peels?
Show HN: Turn any workflow diagram into compilable, running and stateful code
Hi HN folks, I'm a co-creator of the Dapr CNCF project and co-founder of Diagrid. Today we announced a free-to-use web app that takes any form of workflow diagram (UML, BPMN, scribble in your favorite drawing tool or even on paper) and generates code that runs in any IDE and that can be deployed to Kubernetes and other container based systems, based on Dapr's durable execution workflow engine. This essentially allows you to run durable workflows in minutes and leaves out the guesswork for how to structure, code and optimize a code-first workflow app. I'm happy for you to give this a try and provide feedback!
This reminds me of the UML/RUP era from the early 2000s. Is this an attempt to revive or even resurrect UML diagrams and the Rational Unified Process, blending them with AI? I would bet it's all dead forever. I'm skeptical about diagram-driven development making a comeback. In my experience, developers today prefer more agile, code-first approaches, because requirements change rapidly and maintaining diagram-code synchronization is an unbearable challenge.
I believe in UML's usefulness as a whiteboard/blackboard language: a fun way to explain what you need, or what you imagine to be a good architecture, but that's all; it's a drafting tool. But then, why not use it as a communication tool? You would draft something on the board and the LLM would generate the program. Sometimes it is simpler to draw 5 rectangles, name them, and show their relationships in UML class modeling than to explain it textually.
UML class diagrams in mermaid syntax require less code than just defining actual classes with stubbed attrs and methods in some programming languages.
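As a concrete comparison (all class names hypothetical), here is a two-class model in mermaid classDiagram syntax, kept as a string, next to the stubbed Python it describes; the diagram states the same attributes, methods, and association more tersely:

```python
# Hypothetical two-class model: the mermaid classDiagram text (kept here as
# a string for comparison) vs. the stubbed Python classes it describes.
MERMAID = """\
classDiagram
    Order "1" --> "*" LineItem
    class Order {
        +customer: str
        +total() float
    }
    class LineItem {
        +sku: str
        +price: float
    }
"""

class LineItem:
    def __init__(self, sku: str, price: float):
        self.sku = sku
        self.price = price

class Order:
    def __init__(self, customer: str):
        self.customer = customer
        self.items: list[LineItem] = []  # the "1" --> "*" association

    def total(self) -> float:
        return sum(item.price for item in self.items)

order = Order("alice")
order.items.append(LineItem("sku-1", 9.99))
print(order.total())  # 9.99
```

The diagram also gets a rendered picture for free in any mermaid-aware viewer, which the Python stubs do not.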
Years ago I tried ArgoUML for generating plone classes/models, but there was a limit to how much custom code could be round-tripped and/or stuffed into UML XML IIRC.
Similarly, no-code tools are all leaky abstractions: they model with UI metaphors only a subset of the patterns possible in the target programming language, and so round-tripping isn't possible after adding code to the initial or periodic code generation from the synced abstract class diagram.
Instead, it's possible to generate [UML class] diagrams from minimal actual code. For example, the graph_models management command in Django-extensions generates GraphViz diagrams from subclasses of django.models.Model. With code to diagram workflow (instead of the reverse), you don't need to try and stuff code in the extra_code attr of a modeling syntax so that the generated code can be patched and/or transformed after every code generation from a diagram.
https://django-extensions.readthedocs.io/en/latest/graph_mod...
I wrote something similar to generate (IDEF1X) diagrams from models for the specified waterfall process for an MIS degree capstone course.
It may be easier to prototype object-oriented code with UML class diagrams in mermaid syntax, but actual code shouldn't be that tough to generate diagrams from.
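The code-to-diagram direction really is not tough; this hypothetical to_mermaid() helper sketches it in a few lines of introspection, emitting mermaid classDiagram text from live Python classes:

```python
# Hypothetical sketch of code -> diagram: emit mermaid classDiagram text
# from live Python classes via introspection (public methods + inheritance).
def to_mermaid(*classes):
    lines = ["classDiagram"]
    for cls in classes:
        lines.append(f"    class {cls.__name__} {{")
        for name, member in vars(cls).items():
            # keep public callables; skip dunders and data attributes
            if callable(member) and not name.startswith("_"):
                lines.append(f"        +{name}()")
        lines.append("    }")
        for base in cls.__bases__:
            if base is not object:
                lines.append(f"    {base.__name__} <|-- {cls.__name__}")
    return "\n".join(lines)

class Animal:
    def speak(self): ...

class Dog(Animal):
    def fetch(self): ...

print(to_mermaid(Animal, Dog))
```

A real tool would also walk attributes and associations, which is essentially what Django-extensions' graph_models does against model metadata.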
IIRC certain journals like ACM have their own preferred algorithmic pseudocode and LaTeX macros.
Brandon's Semiconductor Simulator
Which other simulators show electron charge density and heat dissipation?
Can this simulate this?:
"Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4 .. https://news.ycombinator.com/item?id=43506198
What about (graphene) superconductors though?
On my info page (https://brandonli.net/semisim/info) there's a list of things my simulation can and can't do. After taking a look at the paper you mentioned, I think simulating it may very well be possible, however it might take a bit of effort. As for graphene, its band structure is different enough that I don't think it would work.
Note that my simulation is intended for educational purposes only, not scientific research.
- Brandon
Thanks, quite the useful simulator; I hadn't found that page yet. Additional considerations for circuit simulators:
What does the simulator say about signal delay and/or propagation in electronic circuits and their fields? How long does it take for a lightbulb to turn on after a switch is thrown, given the length of the circuit and the real distance between points in it?
(I learned this gap in our understanding of electron behavior from this experiment, which had never been done FWIU: "How Electricity Actually Works" (2022) https://www.youtube.com/watch?v=oI_X2cMHNe0 )
FWIW, additionally:
Hall Effect and Quantum Anomalous Hall Effect;
"Tunable superconductivity and Hall effect in a transition metal dichalcogenide" (2025) https://news.ycombinator.com/item?id=43347319
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
From "Single-chip photonic deep neural network with forward-only training" https://news.ycombinator.com/item?id=42314581 :
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024) https://www.nature.com/articles/s41586-023-07010-7
"Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 .. https://news.ycombinator.com/item?id=39365579
> "Room-temperature quantum coherence of entangled multiexcitons in a metal-organic framework" (2024) https://www.science.org/doi/10.1126/sciadv.adi3147
Electrons (and photons and phonons and other fields of particles) are more complex than that though.
I recreated Veritasium's setup in my simulator and measured the current through the load resistor, the results of which are here: https://imgur.com/a/sxVihf0
The gap between the wires is about 1 micrometer, so light should take about 3 fs to propagate through. The simulation output approximately matches this prediction, and over the first few tens of femtoseconds the current increases, with a jump at around 70 fs due to the reflected wave. All of this is pretty much in line with the results of Veritasium's experiment.
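The 3 fs figure checks out back-of-the-envelope, using the gap distance quoted above:

```python
# Back-of-the-envelope check of the propagation time quoted above:
# time for light to cross a 1 micrometer gap.
c = 299_792_458        # speed of light, m/s
gap = 1e-6             # wire separation in the simulation, m
t_fs = gap / c * 1e15  # convert seconds to femtoseconds
print(round(t_fs, 2))  # 3.34
```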
Thanks for bringing it up. I might include this as another example in my sim.
Nice.
These are cool _ wave propagation vids too; Nils Berglund wave visualizations: https://youtu.be/v0cZjOIfwos?si=07w2Wd4dPlGmNxHp
_: photon, fluid, standing transverse, plasma
What about longitudinal waves in plasma, superconductors, and superfluids though? https://www.google.com/search?q=What+about+longitudinal+wave...
I suppose vorticity doesn't matter that much for classical electronic circuits
Arduino is at work to make bio-based PCBs
From the actual paper: "The resulting yield of PCB production was around 50%. Signal analysis was successful with analogue data acquisition (voltage) and low-frequency (4 kHz) tests, indistinguishable from sample FR4 boards. Eventually, the samples were subjected to highly accelerated stress test (HAST). HAST tests revealed limitations compared to traditional FR4 printed circuit materials. After six cycles, the weight loss was around 30% in the case of PLA/Flax, and as three-point bending tests showed, the possible ultimate strength (25 MPa at a flexural state) was reduced by 80%."[1]
This sort of problem has come up many times with attempts to put some biological filler material into a composite. Most biological materials absorb and release water, and change size and weight as they do. This causes trouble for anything exposed to humidity changes. The classic "hemp/soybean car" ran into this problem.[2] In 1941, plastics were more expensive, and there were attempts to find some cheap material to use as filler. That never got beyond a prototype. Modern attempts at bio-composites seem to hit the same problem.[3]
This might have potential for cheap disposable toys, where expected lifetime is in months and disposal as ordinary trash is desirable.
[1] https://iopscience.iop.org/article/10.1088/1361-6528/ad66d3
> The classic "hemp/soybean car" ran into this problem.[2] In 1941,
Damn, I always thought that Cheech and Chong's hemp car was fiction.
Ford hired George Washington Carver. They heated soybeans to develop a bioplastic.
Soybean car > History, Internet video (of Rollins with a fireman's …), Car ingredients: https://en.wikipedia.org/wiki/Soybean_car#Car_ingredients :
> The exact ingredients of the plastic are not known since there were no records kept of the plastic itself. Speculation is that it was a combination of soybeans, wheat, hemp, flax and ramie. Lowell Overly, the person who had the most influence in creating the car, says it was "...soybean fiber in a phenolic resin with formaldehyde used in the impregnation." [16]
What are the binders for aerospace-grade hemp plastic these days? I don't think formaldehyde is required anymore.
Hempitecture has a salt-treated, fire-retardant hemp batting home insulation product which competes with fiberglass and cellulose batting and fill, and cork.
FWIU treated polyurethane foam (like old seat cushions) absorbs oil (OleoSponge),
Kestrel has a modern vehicle made of hemp plastic.
Name of the 75% hemp aircraft made by Hempearth scientist from Canada doing engineering in the US:
Radar (ROC curve in ML, too) and these days Infrared signatures for hemp vehicles and crafts:
Hemp plastic would have been an advantage in WWII if:
These days many major auto manufacturers use hemp parts in production automobiles for its durability, cost, and sustainability in terms of carbon cost for example.
Hemp bast fiber competes with graphene in ultracapacitor anode applications, and IDK why not normal capacitors and batteries too. Hemp anodes are possibly more sustainable than graphene anodes (in supercapacitors and solid state batteries) due to the environmental and health hazards of graphene production and the relative costs of production.
YouTube has videos of hemp batteries; batteries made of hemp. https://www.youtube.com/results?sp=mAEA&search_query=hemp+ba...
Dimensional Hemp Wood lumber is real, and it uses a formaldehyde-free sustainable binder FWIU.
So - and this is what Kestrel and Hempearth are going for - it's probably possible to make closer to 100% of a vehicle or an aircraft with biocomposites in general, or even hemp-only.
> FWIU treated polyurethane foam (like old seat cushions) absorbs oil (OleoSponge),
And Hemp Aerogels are even more oil absorbent than polyurethane foam.
"Hemp plastic door panel sledgehammer test"; History Channel: https://youtube.com/watch?v=Hx8OTH0eEM0&
Re: dandelion (taraxagum) rubber instead of synthetic rubber (plastic) https://news.ycombinator.com/item?id=40892109
Graphene is free when you flash heat unsorted recycled plastic and sell or use the Hydrogen.
Graphene can be produced from CO2.
CO2 is overly-abundant and present in emissions that need to be filtered anyway.
What types of graphene and other forms of carbon do not conduct electricity, are biodegradable, and would be usable as a graphene PCB for semiconductors and superconductors?
Graphene Oxide (low cost of production), Graphane (hydrogen; high cost of production), Diamond (lowering cost of production; also useful for NV (nitrogen-vacancy) quantum computing, probably in part due to the resistivity of the molecular lattice)
How could graphene oxide PCBs be made fire-proof?
Non-conductive flame retardants: phosphorus, nitrogen (melamine), intumescent systems, inorganic fillers
Is there a bio-based flame-retardant organic filler for [Graphene Oxide] PCBs?
Greenwashing - this kind of idea has been floating around for years and I don’t think it’s really that big of a problem
No, we have environmentally and financially unsustainable supply chain dependencies on silicon-grade sand and other gases and minerals.
PCBs are not biodegradable but could be. What is the problem?
You haven't pointed out anything specific to FR4, which is what this would be replacing. This is merely a ploy at getting funding, and I'm very skeptical about it because I've seen 2 or 3 companies do the exact same pitch and fail before.
> The goal: to design and test bio-based multilayer PCBs that reduce environmental impact, without compromising on functionality or performance.
What about cost?
And so instead,
What is a sustainable flame retardant for Graphene Oxide PCBs; and is that a filler?
"Study of properties of graphene oxide nanoparticles obtained by laser ablation from banana, mango, and tangerine peels" (2025) https://www.sciencedirect.com/science/article/pii/S266697812...
LLMs get lost in multi-turn conversation
Humans also often get lost in multi-turn conversation.
I have experienced that in person many, many times. Jumps in context that seem easy for one person to follow, but very hard for others.
So, assuming the paper is legit (arxiv, you never know...), it's more like something that could be improved than a difference from human beings.
Subjectively the "getting lost" feels totally different from human conversations. Once there is something bad in the context, it seems almost impossible to get back on track; all subsequent responses get a lot worse and it starts contradicting itself. It is possible that with more training this problem can be improved, but what is interesting to me isn't that it's worse than humans in this way, but that this sort of difficulty scales differently than it does in humans. I would love to get some more objective descriptions of these subjective notions.
Contradictions are normal. Humans make them all the time. They're even easy to induce, due to the simplistic nature of our communication (lots of ambiguities, semantic disputes, etc).
I don't see how that's a problem.
Subjectivity is part of human communication.
Algorithmic convergence and caching :: Consensus in conversational human communication
Any sufficiently large amount of information exchange could be interpreted as computational if you see it as separated parts. It doesn't mean that it is intrinsically computational.
Seeing human interactions as computer-like is a side effect of our most recent shiny toy. In the last century, people saw everything as gears and pulleys. All of these perspectives are essentially the same reductionist thinking, recycled over and over again.
We've seen men promising that they would build a gear-man, resurrect the dead with electricity, and all sorts of (now) crazy talk. People believed it for some time.
If data integrity is assured, and thus there is no change in the data to store/transfer, then that's the opposite of computationally transforming the data?
How do we see robot and AI and helping interactions in film and tv and games?
A curated list of films for consideration:
Mary Shelley's "Frankenstein" or "The Modern Prometheus" (1818), Metropolis (1927), I, Robot (1940-1950; Three Laws of Robotics, robopsychology), Macy Conferences (1941-1960; Cybernetics), Tobor the Great (1954), Here Comes Tobor (1956), Jetsons' maid's name: Rosie (1962), Lost in Space (1965), 2001: A Space Odyssey (1968), THX 1138 (1971), Star Wars (1977), Terminator (1984), Driving Miss Daisy (1989), Edward Scissorhands (1990), Flubber (1997, 1961), Futurama (TV, 1999-), Star Wars: The Phantom Menace (1999), The Iron Giant (1999), Bicentennial Man (1999), A.I. Artificial Intelligence (2001), Minority Report (2002), I, Robot (2004), Team America: World Police (2004), Wall-E (2008), Iron Man (2008), Eagle Eye (2008), Moon (2009), Surrogates (2009), Tron: Legacy (2010), Hugo (2011), Django Unchained (2012), Her (2013), Transcendence (2014), Chappie (2015), Tomorrowland (2015), The Wild Robot (2016, 2024), Ghost in the Shell (2017),
Giant fighting robots: Gundam (1979), Transformers (TV: 1984-1987, 2007-), Voltron (1984-1985), MechWarrior (1989), The Matrix Revolutions (2003), Avatar (2009, 2022, 2025), Pacific Rim (2013-), RoboCop (1987, 2014), Edge of Tomorrow (2014),
~AI vehicle: Herbie, The Love Bug (1968-), Knight Rider (TV, 1982-1986), Thunder in Paradise (TV, 1993-95), Heat Vision and Jack (1999), Transformers (2007), Bumblebee (2018)
Games: Portal (2007), LEGO Bricktales (2022), While True: learn() (2018), "NPC" Non-Player Character
Category:Films_about_artificial_intelligence : https://en.wikipedia.org/wiki/Category:Films_about_artificia...
List of artificial intelligence films: https://en.wikipedia.org/wiki/List_of_artificial_intelligenc...
Category:Films_about_robots: https://en.wikipedia.org/wiki/Category:Films_about_robots
Category:American_robot_films: https://en.wikipedia.org/wiki/Category:American_robot_films
Objcurses – ncurses 3D object viewer using ASCII in console
How does objcurses compare to display3d in implementation and features?
Textual has neat shell control characters for CLI utilities that might be useful.
FSV is an open clone of FSN (the 3d file browser from Jurassic Park), but it requires OpenGL.
Pretty cool! I honestly hadn’t seen display3d before, not when I was researching similar projects, nor while working on my own and debugging issues. Just checked it out now, and as someone currently learning Rust, I really liked it and definitely starred the repo. Love the Unicode rendering idea.
Textual looks fun too; it feels very much like a Python equivalent of ratatui from Rust, and I also have a project using that library. Definitely something I might explore for building overlays or adding interactive controls around the core renderer, though curses can also render basic buttons and menus.
As for FSV, yeah, that’s more in the OpenGL/GPU territory. My goal was to stay purely terminal-based. By the way, I wasn’t sure if you brought up FSV just for the retro-3D vibe comparison, or if you had something more specific in mind? Curious what you meant there
Just found display3d today, too.
Maybe it was an ascii CLI video of a 3d scene that I remember seeing.
Maybe molecule visualizations? Is it possible to discern handedness from an objcurses render of a molecule like a sugar or an amino acid?
Could a 3D CLI file-browsing interface, good enough for a computer green screen in a movie like Jurassic Park or Hackers, be built with objcurses? wgpu compiles to WASM and WebGL.
Oracle VM VirtualBox – VM Escape via VGA Device
For the record: Oracle's position is that the 3D feature should not be enabled when the VM is untrusted. It's still classified as experimental and will likely remain so for another decade at least.
https://news.ycombinator.com/item?id=43067347 :
> Still hoping for SR-IOV in retail GPUs.
> Not sure about vCPU functionality in GPUs
> Process isolation on vCPUs with or without SR-IOV is probably not as advanced as secure enclave approaches
[Which just fell to post-spectre side channels]
>> Is there sufficient process isolation in GPUs?
/? Sr-iov iommu: https://www.google.com/search?q=sr-iov+iommu
Is there branch prediction in GPUs? What about other side channels between insufficiently-isolated GPU processes?
I see that vgpu_unlock no longer works for technical reasons.
The first year of free-threaded Python
> Instead, many reach for multiprocessing, but spawning processes is expensive
Agreed.
> and communicating across processes often requires making expensive copies of data
SharedMemory [0] exists. Never understood why this isn’t used more frequently. There’s even a ShareableList which does exactly what it sounds like, and is awesome.
[0]: https://docs.python.org/3/library/multiprocessing.shared_mem...
Spawning processes generally takes much less than 1 ms on Unix
Spawning a PYTHON interpreter process might take 30 ms to 300 ms before you get to main(), depending on the number of imports
It's 1 to 2 orders of magnitude difference, so it's worth being precise
This is a fallacy with, say, CGI. A CGI program in C, Rust, or Go works perfectly well.
e.g. sqlite.org runs with a process PER REQUEST - https://news.ycombinator.com/item?id=3036124
>Spawning a PYTHON interpreter process might take 30 ms to 300 ms
Which is why, at least on Linux, Python's multiprocessing doesn't do that but fork()s the interpreter, which takes low-single-digit ms as well.
Even when the 'spawn' strategy is used (default on Windows, and can be chosen explicitly on Linux), the overhead can largely be avoided. (Why choose it on Linux? Apparently forking can cause problems if you also use threads.) Python imports can be deferred (`import` is a statement, not a compiler or pre-processor directive), and child processes (regardless of the creation strategy) name the main module as `__mp_main__` rather than `__main__`, allowing the programmer to distinguish. (Being able to distinguish is of course necessary here, to avoid making a fork bomb - since the top-level code runs automatically and `if __name__ == '__main__':` is normally top-level code.)
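A minimal runnable sketch of the start-method distinction described above (no child processes are actually created here, to keep it self-contained):

```python
import multiprocessing as mp

# Start methods differ in cost and semantics: 'fork' copies the running
# interpreter (cheap, but risky with threads), while 'spawn' starts a fresh
# interpreter that re-imports this module as '__mp_main__', so heavy imports
# guarded by the __main__ check below are skipped in children.
print(sorted(mp.get_all_start_methods()))

if __name__ == "__main__":
    # Only the parent executes this; a spawned child sees '__mp_main__'.
    ctx = mp.get_context("spawn")
    print(type(ctx).__name__)
```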
But also keep in mind that cleanup for a Python process also takes time, which is harder to trace.
Refs:
https://docs.python.org/3/library/multiprocessing.html#conte... https://stackoverflow.com/questions/72497140
I really wish Python had a way to annotate things you don't care about cleaning up. I don't know what the API would look like, but I imagine something like:
l = list(cleanup=False)
for i in range(1_000_000_000): l.append(i)
telling the runtime that we don't need to individually GC each of those tiny objects and can just let the OS's process model free the whole thing at once. Sure, close TCP connections before you kill the whole thing. I couldn't care less about most objects, though.
There's already a global:
import gc
gc.disable()
So I imagine putting more in there to remove objects from the tracking. That can go a long way, so long as you remember to manually GC the handful of things you do care about.
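There is no per-object opt-out today, but gc.disable() plus a manual gc.collect() approximates the idea; a sketch:

```python
import gc

gc.disable()  # stop automatic cyclic collection during the hot phase
data = [[i] for i in range(100_000)]  # lots of small objects, no GC pauses
del data      # refcounting still frees acyclic garbage immediately
gc.collect()  # manually collect whatever reference cycles you do care about
gc.enable()
# gc.freeze() (3.7+) goes further: it moves all currently tracked objects
# into a permanent generation that the collector never scans again.
print(gc.isenabled())  # True
```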
Is there a good way to add __del__() methods or to wrap Context Manager __enter__()/__exit__() methods around objects that never needed them because of the gc?
Hadn't seen this:
import gc
gc.disable()
Cython has __dealloc__() instead of __del__()? Also, there's a recent proposal to add explicit resource management to JS: "JavaScript's New Superpower: Explicit Resource Management" https://news.ycombinator.com/item?id=44012227
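For the earlier question about wrapping __enter__()/__exit__() around objects that lack them: contextlib.closing in the stdlib wraps any object exposing a close(); Handle here is a hypothetical stand-in:

```python
import contextlib

class Handle:
    """Hypothetical resource with a close() but no __enter__/__exit__."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

h = Handle()
with contextlib.closing(h):  # retrofits context-manager semantics
    pass
print(h.closed)  # True
```

contextlib.ExitStack with a registered callback handles the general case where the cleanup method isn't named close().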
Large Language Models Are More Persuasive Than Incentivized Human Persuaders
> Abstract: [...] Overall, our findings suggest that AI's persuasion capabilities already exceed those of humans that have real-money bonuses tied to performance. Our findings of increasingly capable AI persuaders thus underscore the urgency of emerging alignment and governance frameworks.
P.22:
> 4. Implications for AI Regulation and Ethical Considerations
Getting AI to write good SQL
From "Show HN: We open sourced our entire text-to-SQL product" (2024) https://news.ycombinator.com/item?id=40456236 :
> awesome-Text2SQL: https://github.com/eosphoros-ai/Awesome-Text2SQL
> Awesome-code-llm > Benchmarks > Text to SQL: https://github.com/codefuse-ai/Awesome-Code-LLM#text-to-sql
Zinc Microcapacitors Are the Best of Both Worlds
Why Zinc if Carbon is sufficient?
From "Eco-friendly artificial muscle fibers can produce and store energy" https://news.ycombinator.com/item?id=42942421 :
> "Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
That's with mechanical twisting though; graphene supercapacitors in general have lower energy density than (micro-) capacitors?
Show HN: SQL-tString a t-string SQL builder in Python
SQL-tString is a SQL builder that utilises the recently accepted PEP-750, https://peps.python.org/pep-0750/, t-strings to build SQL queries, for example,
from sql_tstring import sql
val = 2
query, values = sql(t"SELECT x FROM y WHERE x = {val}")
assert query == "SELECT x FROM y WHERE x = ?"
assert values == [2]
db.execute(query, values) # Most DB engines support this
The placeholder ? protects against SQL injection, but cannot be used everywhere. For example, a column name cannot be a placeholder. If you try this, SQL-tString will raise an error,
col = "x"
sql(t"SELECT {col} FROM y") # Raises ValueError
To proceed you'll need to declare what the valid values of col can be,
from sql_tstring import sql_context
with sql_context(columns="x"):
    query, values = sql(t"SELECT {col} FROM y")
assert query == "SELECT x FROM y"
assert values == []
Thus allowing you to protect against SQL injection.
As t-strings are format strings you can safely format the literals you'd like to pass as variables,
text = "world"
query, values = sql(t"SELECT x FROM y WHERE x LIKE '%{text}'")
assert query == "SELECT x FROM y WHERE x LIKE ?"
assert values == ["%world"]
This is especially useful when used with the Absent rewriting value.
SQL-tString is a SQL builder, and as such you can use special RewritingValues to alter and build the query you want at runtime. This is best shown by considering a query where you sometimes want to search by one column a, sometimes by b, and sometimes both,
def search(
    *,
    a: str | AbsentType = Absent,
    b: str | AbsentType = Absent,
) -> tuple[str, list[str]]:
    return sql(t"SELECT x FROM y WHERE a = {a} AND b = {b}")
assert search() == ("SELECT x FROM y", [])
assert search(a="hello") == ("SELECT x FROM y WHERE a = ?", ["hello"])
assert search(b="world") == ("SELECT x FROM y WHERE b = ?", ["world"])
assert search(a="hello", b="world") == (
    "SELECT x FROM y WHERE a = ? AND b = ?", ["hello", "world"]
)
Specifically, Absent (which is an alias of RewritingValue.ABSENT) will remove the expression it is present in, and if there are no expressions left after the removal it will also remove the clause.
The other rewriting values I've included handle the frustrating case of comparing to NULL; for example, the following is valid but won't work as you'd likely expect,
optional = None
sql(t"SELECT x FROM y WHERE x = {optional}")
Instead you can use IsNull to achieve the right result,
from sql_tstring import IsNull
optional = IsNull
query, values = sql(t"SELECT x FROM y WHERE x = {optional}")
assert query == "SELECT x FROM y WHERE x IS NULL"
assert values == []
There is also an IsNotNull for the negated comparison.
The final feature allows for complex query building by nesting a t-string within the existing query,
inner = t"x = 'a'"
query, _ = sql(t"SELECT x FROM y WHERE {inner}")
assert query == "SELECT x FROM y WHERE x = 'a'"
This library can be used today without Python 3.14's t-strings, with some limitations, https://github.com/pgjones/sql-tstring?tab=readme-ov-file#pr..., and I've been doing so this year. Thoughts and feedback very welcome.
Just took a quick look, and it seems like the parser is hand-written, which is great, but you probably want to build a lexer and parser based on the BNF grammar. Take a look at how I do it here https://github.com/elixir-dbvisor/sql/tree/main/lib and do conformance testing with https://github.com/elliotchance/sqltest
Thanks, do you have a reference for SQL grammar - I've had no success finding an official source.
Ibis has sqlglot for parsing and rewriting SQL query graphs; and there's sql-to-ibis: https://github.com/ibis-project/ibis/issues/9529
sqlglot: https://github.com/tobymao/sqlglot :
> SQLGlot is a no-dependency SQL parser, transpiler, optimizer, and engine [written in Python]. It can be used to format SQL or translate between 24 different dialects like DuckDB, Presto / Trino, Spark / Databricks, Snowflake, and BigQuery. It aims to read a wide variety of SQL inputs and output syntactically and semantically correct SQL in the targeted dialects.
Open Hardware Ethernet Switch project, part 1
There are 48+2 port switches with OpenWRT support.
Re: initial specs for the (4-port) OpenWrt One, which is built by Banana Pi and supports U-Boot: https://www.cnx-software.com/2024/01/12/openwrt-one-ap-24-xy... .. https://openwrt.org/toh/openwrt/one:
> The non-open-source components include the 2.5GbE PHY and WiFi firmware with blobs running on separate cores that are independent of the main SoC where OpenWrt is running. The DRAM calibration routines are closed-source binaries as well.
Software for FPGA switch, probe, and GHz oscilloscope projects?
/? inurl:awesome vivado https://www.google.com/search?q=inurl%3Aawesome+vivado :
awesome-hdl: https://github.com/drom/awesome-hdl :
sphinx-hwt:
d3-wave probably won't do GHz in realtime. https://github.com/Nic30/d3-wave
Pyqtgraph probably can't realtime plot GHz probe data without resampling either?
pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
The hwtLib README says Vivado supports IP-XACT format.
hwtLib: https://github.com/Nic30/hwtLib :
> hwtLib is the library of hardware components written using the hwt library. Any component can be exported as Xilinx Vivado (IP-XACT) or Quartus IPcore using IpPackager, or as raw Verilog / VHDL / SystemC code and constraints by the to_rtl() function. Target language is specified by keyword parameter serializer.
IP-XACT: https://en.wikipedia.org/wiki/IP-XACT
hwtlib docs > hwtLib.peripheral.ethernet package: https://hwtlib.readthedocs.io/en/latest/hwtLib.peripheral.et...
hwtLib.peripheral.uart package: https://hwtlib.readthedocs.io/en/latest/hwtLib.peripheral.ua...
It looks like there are CRC implementations in hwtlib. Which CRC or hash does U-boot use for firmware flashing? https://www.google.com/search?q=Which+CRC+or+hash+does+U-boo... ... Looks like CRC32 like .zip files but not .tar.gz files.
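The zlib module exposes the same CRC-32 polynomial that .zip files use (and that the search above suggests U-Boot uses for legacy image headers); the standard check value makes this easy to verify:

```python
import zlib

# CRC-32 as used by .zip archives: zlib.crc32 implements the same
# polynomial. The standard check value for this CRC is
# crc32(b"123456789") == 0xCBF43926.
check = zlib.crc32(b"123456789") & 0xFFFFFFFF
print(hex(check))  # 0xcbf43926
```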
U-boot: https://github.com/u-boot/u-boot
OpenWRT docs > "Failsafe mode, factory reset, and recovery mode": https://openwrt.org/docs/guide-user/troubleshooting/failsafe...
Open vSwitch: https://en.wikipedia.org/wiki/Open_vSwitch :
> Open vSwitch can operate both as a software-based network switch running within a virtual machine (VM) hypervisor, and as the control stack for dedicated switching hardware; as a result, it has been ported to multiple virtualization platforms, switching chipsets, and networking hardware accelerators.[7]
"Porting Open vSwitch to New Software or Hardware": https://docs.openvswitch.org/en/latest/topics/porting/
awesome-open-source-hardware: https://github.com/aolofsson/awesome-opensource-hardware
awesome-open-hardware: https://github.com/delftopenhardware/awesome-open-hardware :
> Journal of Open Hardware (JOH), HardwareX Journal,
There are also xilinx (now AMD) FPGA modules in hwtlib:
hwtLib.xilinx package: https://hwtlib.readthedocs.io/en/latest/hwtLib.xilinx.html#
New stainless steel pulls green hydrogen directly out of seawater
... NewsArticle: "New ultra stainless steel for hydrogen production" ; SS-H2 https://www.sciencedaily.com/releases/2023/11/231117102539.h...
ScholarlyArticle: "A sequential dual-passivation strategy for designing stainless steel used above water oxidation." (2023) https://www.sciencedirect.com/science/article/abs/pii/S13697... .. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22...
SS-H2 resists chloride ions (in saltwater) with manganese and chromium in order to sustain electrolysis.
Aluminum requires gallium to keep reacting with water FWIU. Is there a laser or other treatment of aluminum that achieves the same effect as gallium? Could a low voltage be enough to sustain the reaction; like an air brake?
Beyond qubits: Meet the qutrit (and ququart)
ScholarlyArticle: "Quantum error correction of qudits beyond break-even" (2025) https://www.nature.com/articles/s41586-025-08899-y
"Logical states for fault-tolerant quantum computation with propagating light" (2024) https://www.science.org/doi/10.1126/science.adk7560 .. "A physical qubit with built-in error correction" https://news.ycombinator.com/item?id=39243929
"Layer codes" (2024) https://www.nature.com/articles/s41467-024-53881-3 .. https://news.ycombinator.com/item?id=42264340 ; 3D layer codes now instead of 2D surface codes
From https://news.ycombinator.com/item?id=42723063 :
> /? How can fractional quantum hall effect be used for quantum computing https://www.google.com/search?q=How+can+a+fractional+quantum...
>> Non-Abelian Anyons, Majorana Fermions are their own anti particles, Topologically protected entanglement [has lower or no error and thus less need for QEC quantum error correction]
Additive manufacturing of zinc biomaterials for biodegradable in vivo use
NewsArticle: "Additive manufacturing of zinc biomaterials opens new possibilities for biodegradable medical implants" (2025) https://3dprintingindustry.com/news/additive-manufacturing-o...
Ultrasound deep tissue in vivo sound printing
NewsArticle: "Using Ultrasound to Print Inside the Body: Caltech Unveils Deep Tissue In Vivo Sound Printing Technique" (2025) https://3dprintingindustry.com/news/using-ultrasound-to-prin...
The world could run on older hardware if software optimization was a priority
Code bloat: https://en.wikipedia.org/wiki/Code_bloat
Software bloat > Causes: https://en.wikipedia.org/wiki/Software_bloat#Causes
Program optimization > Automated and manual optimization: https://en.wikipedia.org/wiki/Program_optimization#Automated...
Making PyPI's test suite faster
I get that pytest has features that unittest does not, but how is scanning a directory for test files considered appropriate for what the article calls a high-security application?
For high security applications the test suite should be boring and straightforward. pytest is full of magic, which makes it so slow.
Python in general has become so complex, informally specified and bug ridden that it only survives because of AI while silencing critics in their bubble.
The complexity includes PSF development processes, which lead to:
https://www.schneier.com/blog/archives/2024/08/leaked-github...
strace is one way to determine how many stat calls a process makes.
Developers avoid refactoring costs by using dependency inversion, fixtures and functional test assertions without OO in the tests, too.
Pytest collection could be made faster with ripgrep; does it even need AST parsing? A thread here mentions how it's possible to prepare a list of .py test files containing functions that start with "test_" to pass to the `pytest -k` option; for example with ripgrep.
One day I did too much work refactoring tests to minimize maintenance burden and wrote myself a functional test runner that captures AssertionErrors and outputs with stdlib only.
It's possible to use unittest.TestCase() assertion methods functionally:

    assert 0 == 1
    # AssertionError

    import unittest
    test = unittest.TestCase()
    test.assertEqual(0, 1)
    # AssertionError: 0 != 1
unittest.TestCase assertion methods have default error messages, but the `assert` keyword does not. In order to support one-file stdlib-only test modules, I have mocked pytest.mark.parametrize a number of times.
chmp/ipytest is one way to transform `assert a == b` to `assertEqual(a,b)` like Pytest in Jupyter notebooks.
Python continues to top language use and popularity benchmarks.
Python is not a formally specified language, mostly does not have constant time operations (or documented complexity in docstring attrs), has a stackless variant, supported asynchronous coroutines natively before C++, now has some tail-call optimization in 3.14, now has nogil mode, and is GPU accelerated in many different ways.
How best could they scan for API tokens committed to public repos?
Google launches 'implicit caching' to make accessing latest AI models cheaper
Progress toward fusion energy gain as measured against the Lawson criteria
It should be noted that "breakeven" is often misleading.
There's "breakeven" as in "the reaction produces more energy than put into it", and there's breakeven as in "the entire reactor system produces more energy than put into it", which isn't quite the same thing.
In the laser business, the latter is called "wall plug efficiency," which is laser power out per electrical power in.
"Uptime Percentage", "Operational Availability" (OA), "Duty Cycle"
Availability (reliability engineering) https://en.wikipedia.org/wiki/Availability
Terms from other types of work: kilowatt-hour (kWh), weight per rep, number of reps, total time under tension
Mass spectrometry method identifies pathogens within minutes instead of days
It would be interesting to know how much resolution they need for the diagnosis to be reliable.
Because high resolution mass spectrometers cost millions of dollars, and "minutes" for a diagnosis can mean that one spectrometer can only run 3 samples per hour - or 72 per day.
And while a research university can afford a million-dollar spectrometer (and the grad students that run it), even a small hospital will create 72 bacterial swabs per hour, while absolutely not having the money to get 10 spectrometers with the corresponding technicians.
And the incumbent/competitor - standard bacterial cultures - is cheap!
MRI machines cost up to 0.5 million, and take more than minutes for a scan. So this is in the upper realm of reasonable. At this point it is an engineering problem to get costs down.
There are 0.05 Tesla MRI machines that almost work with a normal 15A 110V outlet now FWIU; https://news.ycombinator.com/item?id=40965068 :
> "Whole-body magnetic resonance imaging at 0.05 Tesla" [1800W] https://www.science.org/doi/10.1126/science.adm7168 .. https://news.ycombinator.com/item?id=40335170
Other emerging developments in __ spectroscopy:
/?hnlog Spectro:
NIRS;
> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?
NIRS would be low cost, but the wavelength is long compared to the sample size.
From https://news.ycombinator.com/item?id=38528844 :
> "Reversible optical data storage below the diffraction limit (2023)" [at cryogenic temperatures] https://news.ycombinator.com/item?id=38528844 :
> [...] have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
"Eye-safe laser technology to diagnose traumatic brain injury in minutes" https://news.ycombinator.com/item?id=38510092 :
> "Window into the mind: Advanced handheld spectroscopic eye-safe technology for point-of-care neurodiagnostic" (2023) https://www.science.org/doi/10.1126/sciadv.adg5431
> multiplex resonance Raman spectroscopy
Holotomographic imaging is yet another imaging method that could be less costly than MRI; https://news.ycombinator.com/item?id=40819864
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054 :
> "Terahertz spectroscopy of collective charge density wave dynamics at the atomic scale" (2024) https://www.nature.com/articles/s41567-024-02552-7
Microservices are a tax your startup probably can't afford
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains. Before that? You’re paying the price without getting the benefit: duplicated infra, fragile local setups, and slow iteration. For example, Segment eventually reversed their microservice split for this exact reason — too much cost, not enough value.
Basically this. Microservices are a design pattern for organisations rather than technology. Sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith; it might break out into frontend, backend, and a separate service for async background jobs, e.g. PDF creation is often a background task because of how long it takes to produce. After that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.
Some resume driven developers will choose microservices for startups as a way to LARP a future megacorp job. Startup may fail, but they at least got some distributed system experience. It takes extremely savvy technical leadership to prevent this.
In my experience, it seems the majority of folks know the pitfalls of microservices, and have since like... 2016? Maybe I'm just blessed to have been at places with good engineering, technical leadership, and places that took my advice seriously, but I feel like the majority of folks I've interacted with all have experienced some horror story with microservices that they don't want to repeat.
Does [self-hosted, multi-tenant] serverless achieve similar separation of concerns in comparison to microservices?
Should the URLs contain a version; like /api/v1/ ?
FWIU OpenAPI API schema enable e.g. MCP service discovery, but not multi-API workflows or orchestrations.
(Edit: "The Arazzo Specification - A Tapestry for Deterministic API Workflows" by OpenAPI; src: https://github.com/OAI/Arazzo-Specification .. spec: https://spec.openapis.org/arazzo/latest.html (TIL by using this comment as a prompt))
People are losing loved ones to AI-fueled spiritual fantasies
Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
The idea that people in medieval times believed in a flat Earth is a myth that was invented in the 1800s. See https://en.wikipedia.org/wiki/Myth_of_the_flat_Earth for more.
Galileo Galilei: https://en.wikipedia.org/wiki/Galileo_Galilei :
> Galileo's championing of Copernican heliocentrism was met with opposition
... By the most published majority, whose texts would've been used to train the science LLMs of that time.
And that most published majority believed in the Ptolemaic model. Which as https://en.wikipedia.org/wiki/Geocentric_model#Ptolemaic_mod... says:
> Ptolemy argued that the Earth was a sphere in the center of the universe...
Note. Spherical Earth. Not flat.
But could the Greeks sail?
Did ancient (Eastern?) Jacob's Staff surveying and navigation methods account for the curvature of the earth? https://www.google.com/search?q=Did%20ancient%20(Eastern%3F)... :
- History of geodesy: https://en.wikipedia.org/wiki/History_of_geodesy
FWIU Egyptian sails are Phoenician in origin.
The criticism being directed at this comment (that most pre-Copernican scholarship in fact held the Earth to be spherical) is missing the point.
Musk's xAI in Memphis: 35 gas turbines, no air pollution permits
>methane gas turbines
>nitrogen oxides
Can someone explain to me how those produce nitrogen oxides? High school chem taught me CH4 + 2 O2 -> CO2 + 2 H2O... where do the nitrogen oxides come from?
The turbine produces high temperatures, which cause the nitrogen present in the atmosphere to react with oxygen (thermal NOx). There's also nitrogen present in the fuel; it's not 100% pure methane.
Does AGR (Acidic Gas Reduction) work with methane turbines?
Shouldn't all methane-powered equipment have this AGR (or similar) new emission reduction technology?
From https://www.ornl.gov/news/add-device-makes-home-furnaces-cle... :
> ORNL’s award-winning ultraclean condensing high-efficiency natural gas furnace features an affordable add-on technology that can remove more than 99.9% of acidic gases and other emissions. The technology can also be added to other natural gas-driven equipment.
FWIU basically no generators have catalytic converters, because that requires computer controlled fuel ignition.
...
FWIU, in data centers, "100% Green" means "100% offset by PPAs" (power-purchase agreement); so "200% green" could mean "100% directly-sourced clean energy".
Should they pack up and pay out and start over elsewhere with enough clean energy, or should they be forced to make the methane generators comply?
It sounds like that's what they're adding for their permanent generators. Unfortunate that their portable "temporary" generators weren't equipped with this technology.
Though it's not a methane leak, in their case the problem is the poisonous byproducts of combustion FWIU;
Looks like methane.jpl.nasa.gov isn't up anymore? https://methane.jpl.nasa.gov/
We invested tax dollars in NASA space-based methane leak imaging, because methane is a potent greenhouse gas.
Is this another casualty of their distracting, wasteful, and saboteurial hit on NASA Earth Science funding?
I decided to pay off a school’s lunch debt
On the off chance you're interested in school lunches I highly recommend watching videos of Japanese school lunches on YouTube. There's a bunch out there now and if you were raised in the American system it will probably blow your mind. The idea that lunches can be freshly made, on site, out of healthy ingredients and children are active participants in serving and cleaning up is just crazy. When I encountered it for the first time I felt like a big part of my childhood had been sold to the lowest bidder.
> The idea that lunches can be freshly made, on site, out of healthy ingredients and children
Excellent garden path sentence.
A friend who's a pre-school teacher has this excellent t-shirt (I LMAO the first time I saw it):
let's eat, kids.
let's eat kids.
punctuation saves lives.
https://m.media-amazon.com/images/I/B1pppR4gVKL._CLa%7C2140%...
"Eats, Shoots & Leaves" https://en.wikipedia.org/wiki/Eats,_Shoots_%26_Leaves
A new hairlike electrode for long-term, high-quality EEG monitoring
This is actually an improvement on the lead electrode technology, making them smaller and improving the scalp adhesion for better fidelity. Ostensibly, you would still need an array of them for medical diagnoses, like isolating and/or monitoring a seizure to a particular portion of the brain.
Isn't that circular pad the electrode, and the "hair" just the lead which can be replaced by any copper wire?
Terrible headline. The single hair-like electrode outperforms the connection performance (longevity/signal to noise) of a single electrode from a 21-lead EEG.
It's not just the headline. "... a single electrode that looks just like a strand of hair and is more reliable than the standard, multi-electrode version." "The researchers tested the device’s long-term adhesion and electrical performance and compared it to the current, standard EEG using multiple electrodes."
I read the story three times and I'm still confused. But I'm sure you're right, and I think it's the author who's confused.
My second mental image was an alligator clip connected to a hair on a person’s head.
https://www.psu.edu/news/research/story/future-brain-activit...
Better link from Penn State. My reading of this seems to suggest that these electrodes are better than the standard one, NOT that one electrode is better than 24 leads.
OK, thanks, we’ve changed the URL to this from https://newatlas.com/medical-devices/3d-printed-hairlike-eeg....
[flagged]
That's not what an EEG is, and they have been around for decades.
Did they read and write brain signals with AI? What would the goal of that even be?
Presuming a higher bandwidth interface than what we currently have (voice/text chat).
Many Black Mirror episodes explore this.
The brain machine interface concept seems very useful. My question is the AI part. Aside from the machine learning likely needed to decode brain signals meaningfully at all, why would we want to hook up something with any resemblance to current AI to a brain directly?
The current AI stuff can easily be described as a breakthrough in natural language interfaces, a field engineers have been working on for many years. It's easy to imagine that the methods used to develop current AI could be used for different types of interfaces we've been stuck on.
No
Nitrogen runoff leads to preventable health outcomes, experts say (2021)
/? Nitrogen: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... :
- "Discovery of nitrogen-fixing corn variety could reduce need for added fertilizer" (2018) https://news.ycombinator.com/item?id=17721741
I should have loved biology too
I should write a blog post entitled "I should have loved computer science"
Do you do bioinformatics?
Bioinformatics: https://en.wikipedia.org/wiki/Bioinformatics
Health informatics: https://en.wikipedia.org/wiki/Health_informatics
Genetic algorithm: https://en.wikipedia.org/wiki/Genetic_algorithm :
> Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation.
AP®/College Biology: https://www.khanacademy.org/science/ap-biology
AP®/College Biology > Unit 6: Gene Expression and Regulation > Lesson 6: Mutations: https://www.khanacademy.org/science/ap-biology/gene-expressi...
AP®/College Biology > Unit 7: Natural selection: https://www.khanacademy.org/science/ap-biology/natural-selec...
Rosalind.info has free bioinformatics exercises in Python that apply CS algorithms (browsable as a tree or a list), including genetic combinatorics. https://rosalind.info/problems/list-view/
FWICS there is not a "GA with code exercise" in the AP Bio or Rosalind curricula.
YouTube has videos of simulated humanoids learning to walk with mujoco and genetic algorithms that demonstrate goal-based genetic programming with Cost / Error / Fitness / Survival functions.
Mutating source code AST is a bit different from mutating the parameters of a defined optimization problem; though the task is basically the same: minimize error between input and output, and then apply XAI.
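The loop those walking-simulation videos demonstrate can be sketched with the operators named above (selection, crossover, mutation, and a fitness function) on a toy bitstring problem:

```python
# Toy genetic algorithm: evolve a bitstring toward all ones using
# elitist selection, single-point crossover, and per-bit mutation.
import random

random.seed(0)
BITS, POP, GENS, MUT_RATE = 20, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # toy goal: maximize the number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, BITS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]          # elitist truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Because the top half survives unmutated, the best fitness is monotone non-decreasing across generations; mujoco-style walkers swap the bitstring for controller parameters and the fitness function for distance walked.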
Trump says Harvard will lose tax exempt status
What prevents them from instead justifying tax-exempt status as a faith-based nonprofit for their FSM-related charitable work, say?
I think that assumes that there's some fair evaluating of tax status going on.
Harvard defied Trump and that's why they're making this push. I don't think any given argument to the feds will change that math.
If you work for the feds, I think you know that if you were to follow a process and find that Harvard should retain their tax exempt status then you won't have a job long.
Show HN: Frecenfile – Instantly rank Git files by edit activity
frecenfile is a tiny CLI tool written in Rust that analyzes your Git commit history in order to identify "hot" or "trending" files using a frecency score that incorporates both the frequency and recency of edits.
It is fast enough to give an instant response in most repositories, and a reasonably fast response in practically any repository.
It can be useful for getting a list of "recent"/important files when Git history is the only "usage" history you have available.
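A hypothetical frecency score (the post does not give frecenfile's exact formula) could weight each commit touching a file by an exponentially decaying function of the commit's age:

```python
# Hypothetical frecency score: each edit contributes a weight that
# decays exponentially with age; half-life here is an assumption.
import math

HALF_LIFE_DAYS = 30.0
DECAY = math.log(2) / (HALF_LIFE_DAYS * 86400.0)  # per second

def frecency(edit_timestamps, now):
    """Sum of decayed edit weights; frequent AND recent edits score high."""
    return sum(math.exp(-DECAY * (now - t)) for t in edit_timestamps)
```

Ranking files is then just sorting paths by this score in descending order.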
Felix86: Run x86-64 programs on RISC-V Linux
What remaining challenges are you most interested in solving for felix86—GPU driver support, full 32-bit compatibility, better Wine integration, or something else entirely?
The felix86 compatibility list also lists SuperTux and SuperTuxCart.
"lsteamclient: Add support for ARM64." https://github.com/ValveSoftware/Proton/commit/8ff40aad6ef00... .. https://news.ycombinator.com/item?id=43847860
/? box86: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
"New box86 v0.3.2 and Box64 v0.2.4 released – RISC-V and WoW64 support" (2023) https://news.ycombinator.com/item?id=37197074
/? box64 is:pr RISC-V is:closed: https://github.com/ptitSeb/box64/pulls?q=is%3Apr+risc-v+is%3...
Wikipedia says it will use AI, but not to replace human volunteers
>Scaling the onboarding of new Wikipedia volunteers
could be pretty helpful. I edit a bit and it's confusing in a number of ways, some I still haven't got the hang of. There's very little "thank you for your contribution but it needs a better source - why not this?" and usually just your work reverted without thanks or much explanation.
Could AI sift through removals and score them as biased or vandalism?
And then what to do about "original research" that should've been moved to a different platform or better (also with community review) instead of being deleted?
Wikipedia:No_original_research: https://en.wikipedia.org/wiki/Wikipedia:No_original_research#Using_sources
I'm guessing it could advise about that even if it didn't make decisions.
Healthy soil is the hidden ingredient
I'm a gardening and landscaping enjoyer, but I am constantly confused about the bordering magical thinking surrounding dirt, among other aspects of growing things.
If you look at hydroponics/aeroponics, plants basically need water, light, and fertilizer (N (nitrogen), P (phosphorus), K (potassium), and a few trace minerals). It can be the most synthetic process you've ever seen, and the plants will grow amazingly well.
The other elements regarding soil health, etc, would be much better framed in another way, rather than as directly necessary for plant health. The benefits of maintaining a nice living soil is that it makes the environment self-sustaining. You could just dump synthetic fertilizer on the plant, with some soil additives to help retain the right amount of drainage/retention, and it would do completely fine. But without constant optimal inputs, the plants would die.
If you cultivate a nice soil, such that the plants own/surrounding detritus can be broken down effectively, such that the nutrients in the natural processes can be broken down and made available to the plant, and the otherwise nonoptimal soil texture characteristics could be brought to some positive characteristics by those same processes, then you can theoretically arrive at a point that requires very few additional inputs.
sure, we can make them grow well in a lab. but a natural system is so much simpler and elegant
Plants absorb nitrogen and CO2 from the air and store it in their roots; plants fertilize soil.
If you only grow plants with externally-sourced nutrients, that is neither sustainable nor permaculture.
Though it may be more efficient to grow without soil; soil depletion isn't prevented by production processes that do not generate topsoil.
JADAM is a system developed by a chemicals engineer given what is observed to work in JNF/KNF. https://news.ycombinator.com/item?id=38527264
Where do soil amendments come from, and what would deplete those stocks (with consideration for soil depletion)?
(Also, there are extremely efficient ammonia/nitrogen fertilizer generators, but still then the algae-from-runoff problem. FWIU we should be asking farmers to please produce granulated fertilizer instead of liquid.)
The new biofuel subsidies require no-till farming practices; which other countries are further along at implementing this (in order to prevent or reverse soil depletion)?
Tilling turns topsoil to dirt due to loss of moisture, oxidation, and solar radiation.
The vast majority of plants do not absorb nitrogen from the air. Legumes are the well-known exception.
I think that's why it's good to rotate beans or plant clover cover crop.
Three Sisters: Corn, Beans, Squash: https://en.wikipedia.org/wiki/Three_Sisters_(agriculture)
Companion planting: https://en.wikipedia.org/wiki/Companion_planting
Nitrogen fixation: https://en.wikipedia.org/wiki/Nitrogen_fixation
Most plants do not absorb atmospheric nitrogen, but need external nitrogen fertilizer to grow! That causes serious groundwater pollution!
> The new biofuel subsidies require no-till farming practices
This actually depletes soil of nitrogen!
Why do you believe that no-till farming practices deplete soil of nitrogen more than tilling?
A plausible hypothesis: tilling destroys the bacteria that get nitrogen to plant roots.
Isn't runoff erosion the primary preventable source of nitrogen depletion?
FWIU residue mulch initially absorbs atmospheric nitrogen instead of the soil absorbing it, but that residue and its additional nitrogen eventually decays into the soil.
I have heard that it takes something like five years to successfully completely transform acreage with no-till; and then it's relatively soft and easily plantable and not impacted, so it absorbs and holds water.
No-till farmers are not lacking soil samples.
What would be a good test of total change in soil nitrogen content (and runoff) given no-till and legacy farming practices?
With pressure-injection seeders and laser weeders, how many fewer chemicals are necessary for pro farming?
String Types Considered Harmful
But a lack of string types (or tagged strings) results in injection vulnerabilities: OS, SQL, XSS (JS, CSS, HTML), XML, URI, query string, etc.
How should template autoescaping be implemented [in Zig without string types or type-tagged strings]?
E.g. Jinja2 implements autoescaping with MarkupSafe; strings wrapped in a Markup() type will not be autoescaped because they already have an .__html__() method.
MarkupSafe: https://pypi.org/project/MarkupSafe/
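A stdlib-only sketch of that contract (Markup and escape here are simplified stand-ins for markupsafe's versions, which also handle format() and other operations):

```python
# Sketch of MarkupSafe's autoescaping contract: values with an
# __html__() method are trusted as-is; everything else is entity-escaped.
import html

class Markup(str):
    """Marks a string as already-safe HTML."""
    def __html__(self):
        return str(self)

def escape(value):
    if hasattr(value, "__html__"):
        return Markup(value.__html__())   # already safe: pass through
    return Markup(html.escape(str(value), quote=True))

print(escape('<script>alert(1)</script>'))  # entities only
print(escape(Markup("<em>safe</em>")))      # unchanged
```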
Some time ago, I started to create a project called "strypes" to teach or handle typed strings and escaping correctly.
"Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')" https://cwe.mitre.org/data/definitions/74.html
How to Write a Fast Matrix Multiplication from Scratch with Tensor Cores (2024)
Multiplication algorithm: https://en.wikipedia.org/wiki/Multiplication_algorithm
From https://news.ycombinator.com/item?id=40519828 re: LLMs and matrix multiplication with tensors:
> "You Need to Pay Better Attention" (2024) https://arxiv.org/abs/2403.01643 :
>> Our first contribution is Optimised Attention, which performs similarly to standard attention, but has 3/4 as many parameters and one matrix multiplication fewer per head. Next, we introduce Efficient Attention, which performs on par with standard attention with only 1/2 as many parameters and two matrix multiplications fewer per head and is up to twice as fast as standard attention. Lastly, we introduce Super Attention, which surpasses standard attention by a significant margin in both vision and natural language processing tasks while having fewer parameters and matrix multiplications.
From "Transformer is a holographic associative memory" (2025) https://news.ycombinator.com/item?id=43029899 .. https://westurner.github.io/hnlog/#story-43028710 :
>>> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]) which says that Fourier transforms convert convolutions to products.
From https://news.ycombinator.com/item?id=41322088 :
> "A carbon-nanotube-based tensor processing unit" (2024)
"Karatsuba Matrix Multiplication and Its Efficient Hardware Implementations" (2025) https://arxiv.org/abs/2501.08889 .. https://news.ycombinator.com/item?id=43372227
Google Search to redirect its country level TLDs to Google.com
I wonder if this is related to the first party cookie security model. That is supposedly why Google switched maps from maps.google.com to www.google.com/maps. Running everything off a single subdomain of a single root domain should allow better pooling of data.
Subdomains were chosen historically because it was the sane way to run different infrastructure for each service. Nowadays, with the globally distributed frontends that Google's Cloud offers, path routing and subdomain routing are mostly equivalent. Subdomains are archaic, they are exposing the architecture (separate services) to users out of necessity. I don't think cookies were the motivation but it's probably a nice benefit.
https://cloud.google.com/load-balancing/docs/url-map-concept...
Has anything changed about the risks of running everything with the same key, on the apex domain?
Why doesn't Google have DNSSEC?
To a first approximation, nobody has DNSSEC. It's not very good.
DNSSEC is necessary like GPG signatures are necessary; though also there are DoH/DoT/DoQ and HTTPS.
Google doesn't have DNSSEC because they've chosen not to implement it, FWIU.
/? DNSSEC deployment statistics: https://www.google.com/search?q=dnssec+deployment+statistics...
If not DNSSEC, then they should push another standard for signing DNS records (so that they are signed at rest (and encrypted in motion)).
Do DS records or multiple TLDs and x.509 certs prevent load balancing?
Were there multiple keys for a reason?
So, not remotely necessary at all? Neither DNSSEC nor GPG have any meaningful penetration in any problem domain. GPG is used for package signing in some language ecosystems, and, notoriously, those signatures are all busted (Python is the best example). They're both examples of failed 1990s cryptosystems.
Do you think the way Debian uses gpg signatures for package verification is also broken?
Red Hat too.
Containers, pip, and conda packages have TUF and now there's sigstore.dev and SLSA.dev. W3C Verifiable Credentials is the open web standard JSONLD RDF spec for signatures/attestations.
IDK how many reinventions of GPG there are.
Do all of these systems differ only in key distribution and key authorization, ceteris paribus?
The Chemistry Trick Poised to Slash Steel's Carbon Footprint
> Their process, which uses saltwater and iron oxide instead of carbon-heavy blast furnaces, has been optimized to work with naturally sourced materials. By identifying low-cost, porous iron oxides that dramatically boost efficiency, the team is laying the groundwork for large-scale, eco-friendly steel production. And with help from engineers and manufacturers, they’re pushing this green tech closer to the real world.
ScholarlyArticle: "Pathways to Electrochemical Ironmaking at Scale Via the Direct Reduction of Fe2O3" (2025) https://pubs.acs.org/doi/10.1021/acsenergylett.5c00166 https://doi.org/10.1021/acsenergylett.5c00166
Hypertext TV
thought this was going to be about things like https://en.wikipedia.org/wiki/Hybrid_Broadcast_Broadband_TV (hypertext on tv)
Same. Thanks; TIL about HbbTV: Hybrid Broadcast Broadband TV: https://en.wikipedia.org/wiki/Hybrid_Broadcast_Broadband_TV
Had been wondering how to add a game clock below the TV that syncs acceptably; a third screen: https://news.ycombinator.com/item?id=30890265
Show HN: A VS Code extension to visualise Rust logs in the context of your code
We made a VS Code extension [1] that lets you visualise logs and traces in the context of your code. It basically lets you recreate a debugger-like experience (with a call stack) from logs alone.
This saves you from browsing logs and trying to make sense of them outside the context of your code base.
We got this idea from endlessly browsing traces emitted by the tracing crate [3] in the Google Cloud Logging UI. We really wanted to see the logs in the context of the code that emitted them, rather than switching back-and-forth between logs and source code to make sense of what happened.
It's a prototype [2], but if you're interested, we’d love some feedback.
---
References:
[1]: VS Code: marketplace.visualstudio.com/items?itemName=hyperdrive-eng.traceback
[2]: Github: github.com/hyperdrive-eng/traceback
[3]: Crate: docs.rs/tracing/latest/tracing
Good idea!
This probably saves resources by eliminating the need to re-run code to walk through error messages again.
Integration with time-travel debugging would be even more useful; https://news.ycombinator.com/item?id=30779019
From https://news.ycombinator.com/item?id=31688180 :
> [ eBPF; Pixie, Sysdig, Falco, kubectl-capture, stratoshark, ]
> Jaeger (Uber contributed to CNCF) supports OpenTracing, OpenTelemetry, and exporting stats for Prometheus.
From https://news.ycombinator.com/item?id=39421710 re: distributed tracing:
> W3C Trace Context v1: https://www.w3.org/TR/trace-context-1/#overview
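The v1 `traceparent` header is simple enough to sketch a parser for; a minimal sketch (real implementations also validate hex characters and handle future versions):

```python
def parse_traceparent(header: str) -> dict:
    """Parse a W3C Trace Context v1 traceparent header.

    Format: version(2 hex)-trace_id(32 hex)-parent_id(16 hex)-flags(2 hex)
    """
    parts = header.split("-")
    if len(parts) != 4:
        raise ValueError("traceparent must have exactly 4 fields")
    version, trace_id, parent_id, flags = parts
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("bad trace-id or parent-id length")
    if trace_id == "0" * 32 or parent_id == "0" * 16:
        raise ValueError("all-zero ids are invalid")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": bool(int(flags, 16) & 0x01),  # flag bit 0 = sampled
    }

# Example value from the spec's overview section
ctx = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
```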
Thanks for sharing all these links, super handy! I really appreciate it.
NP; improving QA feedback loops with IDE support is probably as useful as test coverage and test result metrics
/? vscode distributed tracing: https://www.google.com/search?q=vscode+distributed+tracing :
- jaegertracing/jaeger-vscode: https://github.com/jaegertracing/jaeger-vscode
/? line-based display of distributed tracing information in vs code: https://www.google.com/search?q=line-based%20display%20of%20... :
- sprkl personal observability platform: https://github.com/sprkl-dev/use-sprkl
Theoretically it should be possible to correlate deployed code changes with the logs and traces preceding 500 errors; and then recreate the failure condition given a sufficient clone of production (in CI) to isolate and verify the fix before deploying new code.
Practically then, each PR generates logs, traces, and metrics when tested in a test deployment and then in production. FWIU that's the "personal" part of sprkl.
Thanks for sharing, first time I hear about sprkl.dev
'Cosmic radio' detector could discover dark matter within 15 years
From https://news.ycombinator.com/item?id=42376759 :
> FWIU this Superfluid Quantum Gravity rejects dark matter and/or negative mass in favor of supervaucuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From https://news.ycombinator.com/item?id=42371946 :
> Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
>> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
High-voltage hydrogel electrolytes enable safe stretchable Li-ion batteries
Isolated Execution Environment for eBPF
From https://news.ycombinator.com/item?id=43553198 .. https://news.ycombinator.com/item?id=43564972 :
> Can [or should] a microkernel run eBPF? [or WASM?]
The performance benefits of running eBPF in the kernel are substantial and justify it, but how much should a kernel or a microkernel do?
Ask HN: Why is there no better protocol support for WiFi captive portals?
I'm curious why we still rely on hacky techniques like requesting captive.apple.com and waiting for interception, rather than having proper protocol-level support built into WPA. Why can't the WPA protocol simply announce that authentication requires a captive portal?
Every public hotspot I connect to seems flaky: it will sometimes report it's connected when it still requires captive portal auth. Or even when it does work, there's a 15-second delay before the captive screen pops up. Shouldn't this have been solved properly by now?
Does anyone have insight into the technical or historical reasons this remains so messy? If the wireless protocol could announce to the client, through some standard, that they have to complete auth via HTTP, I feel clients could implement a much better experience.
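For context, the "hacky technique" is just an HTTP probe: the OS fetches a known URL and treats any deviation from the expected response as portal interception. A sketch of the classification step, assuming a generate_204-style probe (expected: HTTP 204 with an empty body); the actual probe URLs and expected responses vary by OS vendor:

```python
EXPECTED_STATUS = 204  # generate_204-style probes expect 204 No Content

def portal_suspected(status: int, body: bytes) -> bool:
    """Classify a probe response: any deviation from the expected empty
    204 (e.g. a 302 redirect or an injected login page) suggests an
    intercepting captive portal."""
    return not (status == EXPECTED_STATUS and body == b"")
```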
Related issue: secured DNS must downgrade/fallback to unsecured DNS because of captive portal DNS redirection (because captive portals block access to DNS until the user logs in, and the user can't log into the captive portal without DNS redirection that is prevented by DoH, DoT, and DoQ).
Impact: if you set up someone's computer to use secured DNS only, and their device doesn't have per-SSID connection profiles, then they can't use captive portal hotspot Wi-Fi without knowing how to disable secured DNS.
"Do not downgrade to unsecured DNS unless it's an explicitly authorized captive portal"
IIRC there's a new-ish way to configure DNS-over-HTTPS over DHCP like there is for normal DNS.
Bilinear interpolation on a quadrilateral using Barycentric coordinates
/? Barycentric
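For reference, plain bilinear interpolation blends four corner values with weights that sum to 1 and are nonnegative on the unit square, i.e. they are generalized barycentric coordinates of the quad; a minimal sketch:

```python
def bilerp(p00, p10, p01, p11, u, v):
    """Bilinearly interpolate corner values at parameters u, v in [0, 1].

    The four weights sum to 1 and are each nonnegative on the unit
    square, so they act as (generalized) barycentric coordinates.
    """
    w00 = (1 - u) * (1 - v)
    w10 = u * (1 - v)
    w01 = (1 - u) * v
    w11 = u * v
    return w00 * p00 + w10 * p10 + w01 * p01 + w11 * p11
```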
From "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
Phase from second order amplitude FWIU
Universal photonic artificial intelligence acceleration
"Universal photonic artificial intelligence acceleration" (2025) https://www.nature.com/articles/s41586-025-08854-x :
> Abstract: [...] Here we introduce a photonic AI processor that executes advanced AI models, including ResNet [3] and BERT [20,21], along with the Atari deep reinforcement learning algorithm originally demonstrated by DeepMind [22]. This processor achieves near-electronic precision for many workloads, marking a notable entry for photonic computing into competition with established electronic AI accelerators [23] and an essential step towards developing post-transistor computing technologies.
Photon-based chips vs electron-based chips? It's interesting that photons and electrons are closely related: two photons can become an electron-positron pair and vice versa.
The paper mentions that their photonic chip is less precise than an electronic one, but this looks like an advantage for AI. In fact, the stubborn precision of electron-based processors that erase the quantum nature of electrons is what I think is preventing the creation of real AI. In other words, if a microprocessor is effectively a deterministic complex mechanism, it won't become AI no matter what, but if the quantum nature is let loose, at least slightly, interesting things will start to happen.
There are certainly infinitely many non-halting automata on deterministic electronic processors.
Abstractly, in terms of constructor theory, the non-halting task is independent from a constructor, which is implemented with a computation medium.
FWIU from reading a table on wikipedia about physical platforms for QC it would be possible to do quantum computing with just electron voltage but typical component quality.
So phase; certain phases.
And now parametric single photon emission and detection.
"Low-noise balanced homodyne detection with superconducting nanowire single-photon detectors" (2024) https://news.ycombinator.com/item?id=39537236
"A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
"Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 .. https://arxiv.org/abs/2302.05967 .. "Study of photons in QC reveals photon collisions in matter create vortices" https://news.ycombinator.com/item?id=40600736
"How do photons mediate both attraction and repulsion?" (2025) [as phonons in matter] https://news.ycombinator.com/item?id=42661511 notes re: recent findings with photons ("quanta")
Fedora change aims for 99% package reproducibility
This goal feels like a marketing OKR to me. A proper technical goal would be "all packages, except the ones that have a valid reason, such as signatures, not to be reproducible".
As someone who dabbles a bit in the RHEL world, IIRC all packages in Fedora are signed. In additional the DNF/Yum meta-data is also signed.
IIRC Debian packages themselves aren't signed, but the apt meta-data is signed.
I learned this from an ansible molecule test env setup script for use in containers and VMs years ago; because `which` isn't necessarily installed in containers for example:
(type -p apt && (set -x; apt install -y debsums; debsums | grep -v 'OK$')) || \
(type -p rpm && rpm -Va)  # --verify --all
dnf reads .repo files from /etc/yum.repos.d/ [1] which have various gpg options; here's an /etc/yum.repos.d/fedora-updates.repo:
[updates]
name=Fedora $releasever - $basearch - Updates
#baseurl=http://download.example/pub/fedora/linux/updates/$releasever/Everything/$basearch/
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
countme=1
repo_gpgcheck=0
type=rpm
gpgcheck=1
metadata_expire=6h
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
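Since .repo files are plain INI, the gpg settings can be checked programmatically with only the standard library; a sketch over an abbreviated, assumed repo snippet:

```python
import configparser

REPO_TEXT = """\
[updates]
name=Fedora updates
enabled=1
repo_gpgcheck=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora
"""

# interpolation=None: treat values literally (repo URLs may contain % escapes)
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(REPO_TEXT)
updates = cfg["updates"]
package_sigs_checked = updates.getboolean("gpgcheck")       # per-package signatures
metadata_sig_checked = updates.getboolean("repo_gpgcheck")  # signed repo metadata
```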
From the dnf conf docs [1], there are actually even more per-repo gpg options:
gpgkey
gpgkey_dns_verification
repo_gpgcheck
localpkg_gpgcheck
gpgcheck
1. https://dnf.readthedocs.io/en/latest/conf_ref.html#repo-opti...
2. https://docs.ansible.com/ansible/latest/collections/ansible/... lists a gpgcakey parameter for the ansible.builtin.yum_repository module
For Debian, Ubuntu, Raspberry Pi OS and other dpkg .deb and apt distros:
man sources.list
man sources.list | grep -i keyring -C 10
# trusted:
# signed-by:
# /etc/apt/ trusted.gpg.d/
man apt-secure
man apt-key
apt-key help
less "$(type -p apt-key)"
signing-apt-repo-faq:
https://github.com/crystall1nedev/signing-apt-repo-faq
From "New requirements for APT repository signing in 24.04" (2024) https://discourse.ubuntu.com/t/new-requirements-for-apt-repo... :
> In Ubuntu 24.04, APT will require repositories to be signed using one of the following public key algorithms: [ RSA with at least 2048-bit keys, Ed25519, Ed448 ]
> This has been made possible thanks to recent work in GnuPG 2.4 82 by Werner Koch to allow us to specify a “public key algorithm assertion” in APT when calling the gpgv tool for verifying repositories.
The Mutable OS: Why Isn't Windows Immutable in 2025?
Hey all—this is something I’ve been thinking about for a while in my day-to-day as a desktop support tech. We’ve made huge strides in OS security, but immutability is still seen as exotic, and I don’t think it should be. Curious to hear thoughts or counterpoints from folks who’ve wrestled with these same issues.
I'm working with rpm-ostree distros on workstations. The Universal Blue (Fedora Atomic (CoreOS)) project has OCI images that install as immutable host images.
We were able to install programs as admin on Windows in our university computer lab because of DeepFreeze, almost 20 years ago
"Is DeepFreeze worth it?" https://www.reddit.com/r/sysadmin/comments/18zn3jn/is_deepfr...
TIL Windows has UWF built-in:
"Unified Write Filter (UWF) feature" https://learn.microsoft.com/en-us/windows/configuration/unif...
Re: ~immutable NixOS and SELinux and Flatpaks' chroot filesystems not having SELinux labels like WSL2 either: https://news.ycombinator.com/item?id=43617363
Huh, I had no idea that UWF was a feature of Windows and I'm kind of surprised to not see more widespread adoption for workstation rollouts. DeepFreeze was great (excepting updates and other minor issues) and actively reduced a lot of nuisance issues that we might otherwise have had to deal with when I worked for a school.
> On September 20, 2024, Microsoft announced that Windows Server Update Service would no longer be developed starting with Windows Server 2025.[4] Microsoft encourages business to adopt cloud-based solution for client and server updates, such as Windows Autopatch, Microsoft Intune, and Azure Update Manager. [5]
WSUS Offline installer is also deprecated now.
And then to keep userspace updated too, a package manager like Chocolatey NuGet and this PowerShell script: https://github.com/westurner/dotfiles/blob/develop/scripts/s...
Universal Blue immutable OCI images;
ublue-os/main: https://github.com/ublue-os/main :
> OCI base images of Fedora with batteries included
ublue-os/image-template: https://github.com/ublue-os/image-template :
> Build your own custom Universal Blue Image!
Microsoft took Torvalds, who also devs on Fedora FWIU.
systemd/particleos is an immutable Linux distribution built with mkosi:
"systemd ParticleOS" (2025) https://news.ycombinator.com/item?id=43649088
Immutable:
Idempotent:
Ansible is designed for idempotent tasks that do not further change state if re-run.
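Idempotence here just means check-then-change: the task converges on a desired state and reports "changed" only when it had to act, so re-runs are no-ops. A dependency-free sketch (not Ansible's implementation):

```python
import os
import tempfile

def ensure_line(path: str, line: str) -> bool:
    """Ensure `line` exists in the file; return True iff a change was made."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # desired state already holds: no change
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Re-running the same task changes nothing the second time
with tempfile.TemporaryDirectory() as d:
    cfg = os.path.join(d, "app.conf")
    first = ensure_line(cfg, "debug=true")
    second = ensure_line(cfg, "debug=true")
    with open(cfg) as f:
        content = f.read()
```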
Windows Containers are relatively immutable. Docker Desktop and Podman Desktop include a copy of Kubernetes (k8s) and also kubectl, IIRC
Do GUI apps run in Windows containers?
The MSIX apps can opt to run in a sandbox. It's not perfect, but it's _something_. Plus MSIX helps ensure clean install/uninstall as well as delta updates.
Again, not perfect, but serviceable.
fedora/toolbox and distrobox create containers that can access the X socket and /dev/dri/ to run GUI apps from containers.
Flatpaks share (GNOME,KDE,NVIDIA,podman,) runtimes, by comparison.
Re: MSIX https://news.ycombinator.com/item?id=23394302 :
> MSIX only enforces a sandbox if an application doesn’t elect to use the restricted capabilities that allow it to run without. File system and registry virtualization can be disabled quite easily with a few lines in the package manifest, as well as a host of other isolation features.
Flatseal and KDE and Gnome can modify per-flatpak permissions. IDK if there's a way to do per-flatpak-instance permissions, like containers.
MOUNT --type=cache
Man pages are great, man readers are the problem
I disagree. I have been writing man pages for a while, and mastering the language is hard. The documentation for both mdoc and mandb format is not covering the whole language, and the only remaining reference for roff itself seems to be the book by Brian Kernigham. mdoc and mandb are like a macro set on top of roff.
Just this week I considered proposing to $distro to convert all manpages to markdown as part of the build process, and then use a markdown renderer on the shipped system to display them. This would allow the distro to stop shipping *roff per default.
Markdown profits from the much larger amount of tooling. There are a ton of WYSIWYG editors that would allow non-technical users to write such documentation. I imagine we would all profit if creating manual pages was that easy.
On the other side, Markdown is even less formalized. It's like 15 different dialects from different programs that differ in their feature sets and parsing rules. I do not believe things like "How do I quote an asterisk or underscore so it is rendered verbatim" can be portably achieved in Markdown.
RST/Sphinx solves that problem in that there is a single canonical dialect, and it's already effectively used in many very large projects, including the Linux kernel and (obviously) the Python programming language.
Another big plus in my book is that rST has a much more well defined model for extensions than Markdown.
Running CERN httpd 3.0A from 1996 (2022)
A CVE from the year 2000: https://www.cve.org/CVERecord?id=CVE-2000-0079
Do SBOM tools identify web servers that old?
EngFlow Makes C++ Builds 21x Faster and Software a Lot Safer
Fixing the Introductory Statistics Curriculum
I took AP Stat in HS and then college stats in college unnecessarily. In HS it was TI-83 calculators, and in college it was Excel with optional Minitab. R was new then.
I remember ANOVA being the pinnacle of AP Stat. There are newer ANOVA-like statistical procedures like CANOVA;
"Efficient test for nonlinear dependence of two continuous variables" (2015) https://pmc.ncbi.nlm.nih.gov/articles/PMC4539721/
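For reference, the one-way ANOVA F statistic from that course is just the ratio of between-group to within-group mean squares; a plain-Python sketch:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group vs within-group variance."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    # Mean squares: divide by the respective degrees of freedom
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Third group is clearly shifted, so F should be large
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [10, 11, 12]])
```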
Estimators:
Yellowbrick implements the scikit-learn Estimator API: https://www.scikit-yb.org/en/latest/
"Developing scikit-learn estimators" https://scikit-learn.org/stable/developers/develop.html#esti... :
> All estimators implement the fit method:
estimator.fit(X, y)
> Out of all the methods that an estimator implements, fit is usually the one you want to implement yourself. Other methods such as set_params, get_params, etc. are implemented in BaseEstimator, which you should inherit from. You might need to inherit from more mixins, which we will explain later.
https://scikit-learn.org/stable/modules/generated/sklearn.pi...
Sklearn Glossary > estimator: https://scikit-learn.org/stable/glossary.html#term-estimator
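The contract is small enough to sketch without sklearn installed: `__init__` only stores hyperparameters, `fit` stores learned attributes with trailing underscores and returns `self`, and `get_params`/`set_params` expose the hyperparameters. A dependency-free sketch of that shape (a real estimator would inherit from sklearn's BaseEstimator instead):

```python
class MeanPredictor:
    """Predicts the training-set mean of y; follows the estimator shape."""

    def __init__(self, offset=0.0):
        self.offset = offset  # __init__ only stores hyperparameters

    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)  # learned attrs get a trailing underscore
        return self                   # fit returns self, enabling chaining

    def predict(self, X):
        return [self.mean_ + self.offset for _ in X]

    def get_params(self, deep=True):
        return {"offset": self.offset}

    def set_params(self, **params):
        for name, value in params.items():
            setattr(self, name, value)
        return self
```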
https://news.ycombinator.com/item?id=41311052
https://news.ycombinator.com/item?id=28523442
What is GridSearchCV, and what ways are there to find optimal parameters faster than brute-force grid search?
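GridSearchCV exhaustively scores every parameter combination; common faster alternatives are randomized search (sklearn's RandomizedSearchCV), successive halving, and Bayesian optimization. A dependency-free sketch of the randomized idea over a toy objective (the parameter names here are made up for illustration):

```python
import random

def random_search(score_fn, param_space, n_iter=20, seed=0):
    """Sample n_iter random parameter combinations instead of the full grid."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {name: rng.choice(values) for name, values in param_space.items()}
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# Toy objective peaking at alpha=0.1, depth=3 (hypothetical parameter names)
space = {"alpha": [0.01, 0.1, 1.0], "depth": [1, 3, 5]}
score = lambda p: -abs(p["alpha"] - 0.1) - abs(p["depth"] - 3)
best, best_score = random_search(score, space, n_iter=200)
```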
IRL case studies with applied stats and ML and AI:
- "ML and LLM system design: 500 case studies to learn from" https://news.ycombinator.com/item?id=43629360
Additional stats resources:
- "Seeing Theory" Brown https://seeing-theory.brown.edu/
- "AP®/College Statistics" Khan Academy https://www.khanacademy.org/math/ap-statistics
- "Think Stats: 3rd Edition" notebooks https://github.com/AllenDowney/ThinkStats/tree/v3
And physics and information theory in relation to stats as a field with many applications:
- "Information Theory: A Tutorial Introduction" (2019) https://arxiv.org/abs/1802.05968 .. https://g.co/kgs/sPha7qR
- Entropy > Statistical mechanics: https://en.wikipedia.org/wiki/Entropy#Statistical_mechanics
- Statistical mechanics: https://en.wikipedia.org/wiki/Statistical_mechanics
- Quantum Statistical mechanics: https://en.wikipedia.org/wiki/Quantum_statistical_mechanics
Statistical procedures are inferential procedures.
There are inductive, deductive, and abductive methods of inference.
Statistical inference: https://en.wikipedia.org/wiki/Statistical_inference
Statistical literacy: https://en.wikipedia.org/wiki/Statistical_literacy
Also, the ROC curve Wikipedia sidebar: https://en.wikipedia.org/wiki/Receiver_operating_characteris...
What is the difference between accuracy, precision, specificity, and sensitivity?
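They're all ratios of confusion-matrix cells: accuracy counts all correct calls, precision conditions on the predicted positives, sensitivity (recall/TPR) on the actual positives, and specificity (TNR) on the actual negatives. A small sketch:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Compute the four standard rates from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),  # all correct / all cases
        "precision":   tp / (tp + fp),  # of predicted positives, how many are real
        "sensitivity": tp / (tp + fn),  # of real positives, how many found (recall/TPR)
        "specificity": tn / (tn + fp),  # of real negatives, how many found (TNR)
    }

m = confusion_metrics(tp=8, fp=2, tn=85, fn=5)
```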
Khan Academy has curriculum alignment codes on their OER content, but not yet Schema.org/about and/or :educationalAlignment for their https://schema.org/LearningResource (s)
A step towards life on Mars? Lichens survive Martian simulation in new study
"Ionizing radiation resilience: how metabolically active lichens endure exposure to the simulated Mars atmosphere" (2025) https://imafungus.pensoft.net/article/145477/
We've outsourced our confirmation biases to search engines
To write an unbiased systematic review, there are procedures.
In context to the scientific method, isn't RAG also wrong?
Generate a bad argument and then source support for it with RAG or by only searching for confirmation biased support.
I suppose I make this mistake too; I don't prepare systematic reviews, so my research meta-procedure has always been inadequate.
Usually I just [...] but a more scientific procedure would be [...].
Baby Steps into Genetic Programming
Genetic programming: https://en.wikipedia.org/wiki/Genetic_programming
Evolutionary computation > History: https://en.wikipedia.org/wiki/Evolutionary_computation#History
- "The sad state of property-based testing libraries" re coverage-guided fuzzing and other Hilbert spaces: https://news.ycombinator.com/item?id=40884466
- re: MOSES and Combo (a Lisp) and now Python too in re: "Show HN: Codemodder – A new codemod library for Java and Python" and libCST and AST and FST; https://news.ycombinator.com/item?id=39139198
- opencog/asmoses implements MOSES on AtomSpace, a hypergraph for algorithmic graph rewriting: https://github.com/opencog/asmoses
SELinux on NixOS
There's also the Fedora SELinux policy and the container-selinux policy set.
Rootless containers with podman lack labels like virtualenvs and NixOS.
Distrobox and toolbox set a number of options for rootless and regular containers;
--userns=keepid
--security-opt=label=disable
-v /tmp/path:/tmp/path:Z
-v /tmp/path:/tmp/path:z
--gpus all
"How Nix Works"
https://nixos.org/guides/how-nix-works/ :
How Nix works: builds have a cache key that's a hash of all build parameters - /nix/store/<cachekey> - so that atomic upgrades and rollback work.
How NixOS works: config files are also snapshotted at a cache key for rollback and atomic upgrades that don't fail if interrupted mid-package-install.
> A big implication of the way that Nix/NixOS stores packages is that there is no /bin, /sbin, /lib, /usr, and so on. Instead all packages are kept in /nix/store. (The only exception is a symlink /bin/sh to Bash in the Nix store.) Not using ‘global’ directories such as /bin is what allows multiple versions of a package to coexist. Nix does have a /etc to keep system-wide configuration files, but most files in that directory are symlinks to generated files in /nix/store.
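The cache-key idea can be sketched in a few lines: hash all build parameters deterministically and embed the digest in the store path, so changed inputs produce a new path while old outputs remain untouched (illustrative only; this is not Nix's actual hashing scheme):

```python
import hashlib
import json

def store_path(name: str, version: str, inputs: dict) -> str:
    """Derive a content-addressed store path from all build parameters."""
    payload = json.dumps(
        {"name": name, "version": version, "inputs": inputs},
        sort_keys=True,  # deterministic serialization
    )
    digest = hashlib.sha256(payload.encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}-{version}"

p1 = store_path("hello", "2.12", {"cc": "gcc-13"})
p2 = store_path("hello", "2.12", {"cc": "gcc-14"})  # different input -> new path
```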
But which files should be assigned which extended filesystem attribute labels; signed packages only, local builds of [GPU drivers, out-of-tree modules,], not-yet-packaged things like ProtonGE;
I remembered hearing that Bazzite ships with Nix.
This [1] describes installing with the nix-determinate-installer [2] and then [3] installing home-manager :
nix run nixpkgs#home-manager -- switch --flake nix/#$USER
[1] https://universal-blue.discourse.group/t/issue-installing-ce...
[2] https://github.com/DeterminateSystems/nix-installer
[3] https://nixos.wiki/wiki/Home_Manager
Working with rpm-ostree; upgrading the layered firefox RPM without a reboot requires -A/--apply-live (which runs twice) and upgrading the firefox flatpak doesn't require a reboot, but SELinux policies don't apply to flatpaks which run unconfined FWIU.
"SEC: Flatpak Chrome, Chromium, and Firefox run without SELinux confinement?" https://discussion.fedoraproject.org/t/sec-flatpak-chrome-ch...
From https://news.ycombinator.com/item?id=43564972 :
> Flatpaks bypass selinux and apparmor policies and run unconfined (on DAC but not MAC systems) because the path to the executable in the flatpaks differs from the system policy for /s?bin/* and so wouldn't be relabeled with the necessary extended filesystem attributes even on `restorecon /` (which runs on reboot if /.autorelabel exists).
Having written a venv_relabel.sh script to copy selinux labels from /etc onto $VIRTUAL_ENV/etc , IDK; consistently-built packages are signed and maintained. The relevant commands from that script: https://github.com/westurner/dotfiles/blob/develop/scripts/v... :
sudo semanage fcontext --modify -e "${_equalpth}" "${_path}";
sudo restorecon -Rv "${_path}";
Is something like this necessary for every /nix/store/<key> directory?
sudo semanage fcontext --modify -e "/etc" "${VIRTUAL_ENV}/etc";
sudo restorecon -Rv "${VIRTUAL_ENV}/etc";
Or is there a better way to support chroots with selinux, with or without extending the existing SELinux functionality?
Max severity RCE flaw discovered in widely used Apache Parquet
Does anyone know if pandas is affected? I serialize/deserialize dataframes, and pandas uses parquet under the hood.
Pandas doesn't use the parquet python package under the hood: https://pandas.pydata.org/docs/reference/api/pandas.read_par...
> Parquet library to use. If ‘auto’, then the option io.parquet.engine is used. The default io.parquet.engine behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if ‘pyarrow’ is unavailable.
Those should be unaffected.
Python pickles have the same issue but it is a design decision per the docs.
Python docs > library > pickle: https://docs.python.org/3/library/pickle.html
Re: a hypothetical pickle parser protocol that doesn't eval code at parse time; "skipcode pickle protocol 6"; "AI Supply Chain Attack: How Malicious Pickle Files Backdoor Models" .. "Insecurity and Python Pickles": https://news.ycombinator.com/item?id=43426963
But python pickle is only supposed to be used with trusted input, so it’s not a vulnerability.
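Both points are easy to demonstrate: a pickle payload names a callable that `pickle.loads` will invoke, while the stdlib's `pickletools.dis` can disassemble the opcode stream without executing it (a harmless payload here, `list`, standing in for something malicious):

```python
import io
import pickle
import pickletools

class Gadget:
    def __reduce__(self):
        # pickle stores this callable + args and calls them at *load* time
        return (list, (("side", "effect"),))

data = pickle.dumps(Gadget())

# Static inspection: disassemble the opcodes without executing anything
out = io.StringIO()
pickletools.dis(data, out=out)
listing = out.getvalue()

loaded = pickle.loads(data)  # executes the __reduce__ payload
```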
[deleted]
A university president makes a case against cowardice
[flagged]
Then they would need to tax nonprofit religious organizations too.
Why don't they just make the special interests pay their own multi-trillion dollar war bills instead of sabotaging US universities with surprise taxes?
If you increase expenses and cut revenue, what should you expect for your companies?
Why not just make a flat tax for everyone and end all the special interest pandering and exceptions for the rich. It is a poisonous misapplication of the time of our government to constantly be fiddling with tax code to favor one group or another.
Because a lot of people, including many economists, believe capital accumulating endlessly to the same class of thousand-ish people is bad. A flat income tax exacerbates wealth inequality considerably.
Our tax now is worse than flat. Warren Buffett brags about paying a lower percentage than his secretary.
Either compare ideal tax structures with “no loopholes” (none of these exist in the real world) or compare actually-existing tax structures.
Comparing your ideal flat income tax with the current system is apples to oranges.
>>Why don't they just make the special interests pay their own multi-trillion dollar war bills instead of sabotaging US universities with surprise taxes?
>Either compare ideal tax structures with “no loopholes” (none of these exist in the real world) or compare actually-existing tax structures.
Hence I cannot compare your suggestion with the current system, as it is apples to oranges because loopholes would exist.
My thesis is a flat tax would help to minimize the very loopholes you damn. The larger the tax code and the more it panders to particular interest, generally the more opportunity for 'loopholes.'
I don't want to work for a business created by, uh, upper class folks that wouldn't have done it if not for temporary tax breaks by a pandering grifter executive.
I believe in a strong middle class and upward mobility for all.
I don't think we want businesses that are dependent on war, hate, fear, and division for continued profitability.
I don't know whether a flat or a regressive or a progressive tax system is more fair or more total society optimal.
I suspect it is true that higher-income individuals receive more total subsidies than lower-income individuals.
You don't want a job at a firm that an already-wealthy founder could only pull off due to short-term tax breaks and wouldn't have founded if taxes go any higher.
You want a job at a firm run by people who are going to keep solving for their mission regardless of high taxes due to immediately necessary war expenses, for example.
In the interests of long-term economic health and national security of the United States, I don't think they should be cutting science and medical research funding.
Science funding has positive returns. Science funding has greater returns than illegal wars (that still aren't paid for).
Find 1980 on these charts of tax receipts, GDP, and income inequality: https://news.ycombinator.com/item?id=43140500 :
> "Federal Receipts as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/FYFRGDA188S
> "Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
From https://news.ycombinator.com/item?id=43220833 re: income inequality:
> GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
Find 1980 on a GINI index chart.
Yeah, I mean, I think we agree on most points.
I think there’s too many confounding economic factors to look at GINI alone and conclude the 1980 turning point was caused by nerfing the top income tax bracket. But a compelling argument could probably be made with more supporting data, which of course this margin is too narrow to contain and etc.
Better would be to remove inheritance after death, instead distributing that wealth among the citizenship equally.
List of countries by inheritance tax rates: https://en.wikipedia.org/wiki/List_of_countries_by_inheritan...
InitWare, a portable systemd fork running on BSDs and Linux
Shoot. Almost there, at least for us cybersecurity-minded folks.
Default-deny-all, then selecting what a process needs, is the better security granularity.
This default-ALLOW-all is too problematic for today's (and future) security needs.
Cuts down on the compliance paperwork too.
DAC: Discretionary Access Control: https://en.wikipedia.org/wiki/Discretionary_access_control :
> The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control).
Which permissions and authorizations can be delegated?
DAC is the out of the box SELinux configuration for most Linux distros; some processes are confined, but if the process executable does not have the necessary extended filesystem attribute labels the process runs unconfined; default allow all.
You can see which processes are confined with SELinux contexts with `ps -Z`.
MAC is default deny all;
MAC: Mandatory Access Control: https://en.wikipedia.org/wiki/Mandatory_access_control
The biggest problem is that SELinux policy is compiled into components understood only by the SELinux engine.
It does not help that the SELinux source text is written at its grittiest granularity, which ironically is the best kind of security, but only if composed by the savviest SELinux system admins.
It often requires full knowledge of any static/dynamic libraries a program links, any additional dynamic libraries they call, and their resource usage.
Additional frontend UI will be required to proactively determine suitability with those dynamic libraries before any ease of SELinux deployment.
For now, it is trial and error, in part for intermediate or junior system admins.
From https://news.ycombinator.com/item?id=30025477 :
> [ audit2allow, https://stopdisablingselinux.com/ ]
Applications don't need to be compiled with selinux libraries unless they want to bypass CLI tools like chcon and restorecon (which set extended filesystem attributes according to the system policy; typically at package install time if the package provenance is sufficient) by linking with libselinux.
Deterministic remote entanglement using a chiral quantum interconnect
From https://news.ycombinator.com/item?id=43044159 :
>>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
ScholarlyArticle: "Deterministic remote entanglement using a chiral quantum interconnect" (2025) https://www.nature.com/articles/s41567-025-02811-1
The state of binary compatibility on Linux and how to address it
I don't understand why they don't just statically link their binaries. First, they said this:
> Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime.
But then they immediately said they actually statically link all of their deps aside from libc.
> Instead, we take a different approach: statically linking everything we can.
If they're statically linking everything other than libc, then using musl or statically linking glibc will finish the job. Unless they have some need for loading shared libs at runtime which they didn't already have linked into their binary (i.e. manual dlopen), this solves the portability problem on Linux.
What am I missing (assuming I know of the security implications of statically linked binaries -- which they didn't mention as a concern)?
And please, statically linking everything is NOT a solution -- the only reason I can run some games from 20 years ago still on my recent Linux is because they didn't decide to stupidly statically link everything, so I at least _can_ replace the libraries with hooks that make the games work with newer versions.
As long as the library is available.
Neither static nor dynamic linking is looking to solve the 20 year old binaries issue, so both will have different issues.
But I think it's easier for me to find a 20 year old ISO of a Red Hat/Slackware where I can simply run the statically linked binary. Dependency hell for older distros becomes really difficult when the older packages are not archived anywhere anymore.
It's interesting to think how a 20 year old OS plus one program is probably a smaller bundle size than many modern Electron apps ostensibly built "for cross platform compatibility". Maybe microkernels are the way.
How should a microkernel run (WASI) WASM runtimes?
Docker can run WASM runtimes, but I don't think podman or nerdctl can yet.
From https://news.ycombinator.com/item?id=38779803 :
docker run \
--runtime=io.containerd.wasmedge.v1 \
--platform=wasi/wasm \
secondstate/rust-example-hello
From https://news.ycombinator.com/item?id=41306658 :
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
Native containers run on the host and can host normal containers if a container engine is installed. Compared to an Electron runtime, IDK how minimal a native container with systemd, podman, WASM runtimes, and portable GUI rendering libraries could be.
CoreOS, which was for creating minimal host images that host containers, became Fedora Atomic, now the Fedora Atomic Desktops built with rpm-ostree: Silverblue, Kinoite, Sericea; and Bazzite and Secureblue.
Secureblue has a hardened_malloc implementation.
From https://jangafx.com/insights/linux-binary-compatibility :
> To handle this correctly, each libc version would need a way to enumerate files across all other libc instances, including dynamically loaded ones, ensuring that every file is visited exactly once without forming cycles. This enumeration must also be thread-safe. Additionally, while enumeration is in progress, another libc could be dynamically loaded (e.g., via dlopen) on a separate thread, or a new file could be opened (e.g., a global constructor in a dynamically loaded library calling fopen).
FWIU, ROP (Return-Oriented Programming) and gadget-finding approaches have implementations of things like dynamic header discovery of static and dynamic libraries at runtime, in order to compile more at runtime. That isn't safe, though: nothing reverifies what's mutated after loading the executable into process space, after NX tagging or not, before and after secure enclaves and LD_PRELOAD (which some Go binaries don't respect, for example).
Can a microkernel do eBPF?
What about a RISC machine for WASM and WASI?
"Customasm – An assembler for custom, user-defined instruction sets" (2024) https://news.ycombinator.com/item?id=42717357
Maybe that would shrink some of these flatpaks which ship their own Electron runtimes instead of like the Gnome and KDE shared runtimes.
Python's manylinux project specifies a number of libc versions that manylinux packages portably target.
Manylinux relies on a tool called auditwheel for Linux, delocate for macOS, and delvewheel for Windows;
Auditwheel > Overview: https://github.com/pypa/auditwheel#overview :
> auditwheel is a command line tool to facilitate the creation of Python wheel packages for Linux (containing pre-compiled binary extensions) that are compatible with a wide variety of Linux distributions, consistent with the PEP 600 manylinux_x_y, PEP 513 manylinux1, PEP 571 manylinux2010 and PEP 599 manylinux2014 platform tags.
> auditwheel show: shows external shared libraries that the wheel depends on (beyond the libraries included in the manylinux policies), and checks the extension modules for the use of versioned symbols that exceed the manylinux ABI.
> auditwheel repair: copies these external shared libraries into the wheel itself, and automatically modifies the appropriate RPATH entries such that these libraries will be picked up at runtime. This accomplishes a similar result as if the libraries had been statically linked without requiring changes to the build system. Packagers are advised that bundling, like static linking, may implicate copyright concerns
github/choosealicense.com: https://github.com/github/choosealicense.com
From https://news.ycombinator.com/item?id=42347468 :
> A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600
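That compatibility rule is easy to check at runtime; a sketch, assuming CPython's `platform.libc_ver()` (the glibc version comparison is per PEP 600):

```python
import platform

def supports_manylinux(tag_major, tag_minor):
    """A manylinux_x_y wheel requires glibc >= x.y (PEP 600).
    Compare the running libc against the tag's version."""
    libc, version = platform.libc_ver()
    if libc != "glibc":
        return False  # e.g. musl systems use musllinux tags instead
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= (tag_major, tag_minor)

# On a typical current glibc distro the first is True, the second False
print(supports_manylinux(2, 17), supports_manylinux(99, 0))
```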
Return oriented programming: https://en.wikipedia.org/wiki/Return-oriented_programming
/? awesome return oriented programming site:github.com https://www.google.com/search?q=awesome+return+oriented+prog...
This can probably find multiple versions of libc at runtime, too: https://github.com/0vercl0k/rp :
> rp++ is a fast C++ ROP gadget finder for PE/ELF/Mach-O x86/x64/ARM/ARM64 binaries.
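As a toy illustration of what gadget finders scan for (rp++ does real disassembly; this just locates x86 `ret` opcodes in a byte blob):

```python
def find_ret_offsets(blob, max_hits=10):
    """Toy gadget scan: offsets of the x86 `ret` opcode (0xC3) in a
    code blob. Real tools like rp++ disassemble backwards from each
    ret to enumerate usable instruction sequences (gadgets)."""
    return [i for i, b in enumerate(blob) if b == 0xC3][:max_hits]

# mov rdi, rax; ret; nop; pop rdi; ret
code = bytes([0x48, 0x89, 0xC7, 0xC3, 0x90, 0x5F, 0xC3])
print(find_ret_offsets(code))  # [3, 6]
```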
> How should a microkernel run (WASI) WASM runtimes?
Same as any other kernel—the runtime is just a userspace program.
> Can a microkernel do eBPF?
If it implements it, why not?
Should a microkernel implement eBPF and WASM? Or, for the same reasons that justify a microkernel, should eBPF and most other things be confined or relegated to userspace; in terms of microkernel goals like separation of concerns and least privilege, and then performance?
Linux containers have process isolation features that userspace sandboxes like bubblewrap and runtimes don't.
Flatpaks bypass SELinux and AppArmor policies and run unconfined (on DAC but not MAC systems), because the path to an executable in a Flatpak differs from the system policy for */s?bin/* and so wouldn't be relabeled with the necessary extended filesystem attributes even on `restorecon /` (which runs on reboot if /.autorelabel exists).
Thus, e.g. Firefox from a signed package in a container on the host, and Firefox from a package on the host are more process-isolated than Firefox in a Flatpak or from a curl'ed statically-linked binary because one couldn't figure out the build system.
Container-selinux, Kata containers, and GVisor further secure containers without requiring the RAM necessary for full VM virtualization with Xen or Qemu; and that is possible because of container interface standards.
Linux machines run ELF binaries, which could include WASM instructions.
/? ELF binary WASM : https://www.google.com/search?q=elf+binary+wasm :
mewz-project/wasker https://github.com/mewz-project/wasker :
> What's new with Wasker is, Wasker generates an OS-independent ELF file where WASI calls from Wasm applications remain unresolved.
> This unresolved feature allows Wasker's output ELF file to be linked with WASI implementations provided by various operating systems, enabling each OS to execute Wasm applications.
> Wasker empowers your favorite OS to serve as a Wasm runtime!
Why shouldn't we container2wasm everything? Because (rootless) Linux containers better isolate the workload than any current WASM runtime in userspace.
Non-Abelian Anyons and Non-Abelian Vortices in Topological Superconductors
"Non-Abelian Anyons and Non-Abelian Vortices in Topological Superconductors" (2023) https://arxiv.org/abs/2301.11614 :
> Abstract: Anyons are particles obeying statistics of neither bosons nor fermions. Non-Abelian anyons, whose exchanges are described by a non-Abelian group acting on a set of wave functions, are attracting a great attention because of possible applications to topological quantum computations. Braiding of non-Abelian anyons corresponds to quantum computations. The simplest non-Abelian anyons are Ising anyons which can be realized by Majorana fermions hosted by vortices or edges of topological superconductors, ν=5/2 quantum Hall states, spin liquids, and dense quark matter. While Ising anyons are insufficient for universal quantum computations, Fibonacci anyons present in ν=12/5 quantum Hall states can be used for universal quantum computations. Yang-Lee anyons are non-unitary counterparts of Fibonacci anyons. Another possibility of non-Abelian anyons (of bosonic origin) is given by vortex anyons, which are constructed from non-Abelian vortices supported by a non-Abelian first homotopy group, relevant for certain nematic liquid crystals, superfluid 3He, spinor Bose-Einstein condensates, and high density quark matter. Finally, there is a unique system admitting two types of non-Abelian anyons, Majorana fermions (Ising anyons) and non-Abelian vortex anyons. That is 3P2 superfluids (spin-triplet, p-wave paring of neutrons), expected to exist in neutron star interiors as the largest topological quantum matter in our universe.
> nematic liquid crystals,
- "Tunable entangled photon-pair generation in a liquid crystal" (2024) https://www.nature.com/articles/s41586-024-07543-5 .. https://news.ycombinator.com/item?id=40815388 .. SPDC entangled photon generation with nematic liquid crystal
NewsArticle: "A liquid crystal source of photon pairs" (2024) https://www.sciencedaily.com/releases/2024/06/240614141916.h...
Sadly, despite a stem PhD, I have no way of assessing whether this is an April Fools submission or not. I recognise some of the words in the abstract, like 'non-Abelian', but that's it.
Maybe it's time to retire.
It is not a joke. Topological anyons are the real deal, someday.
This is an article I found while preparing a comment about superfluid quantum gravity; https://www.reddit.com/r/AskPhysics/comments/1iqvxn0/comment...
I was collecting relevant articles and found this one.
TIL about the connection between superfluid quantum gravity, supersolid spin nematic crystals, and spin-nematic liquid crystals (LCDs) for waveguiding entangled photons.
And also TIL about anyons in neutron stars, which are typically or always the impetus for black holes.
And also, hey, "3He" again.
Show HN: Offline SOS signaling+recovery app for disasters/wars
A couple of months ago, I built this app to help identify people stuck under rubble.
First responders have awesome tools. But in tough situations, even common folks need to help.
After what happened in Myanmar, we need something like this that works properly.
It has only been tested in controlled environments. It can also be improved; I know BLE is not _that_ effective under rubble.
If you have any feedback or can contribute, don't hold back.
> It can also be improved; I know BLE is not _that_ effective under rubble.
It's a tough problem to solve because you're up against the laws of physics and the very boring (and often counterintuitive) "Antenna Theory". Bluetooth is in the UHF band, and UHF isn't good for penetrating anything, let alone concrete rubble.
To penetrate rubble effectively you really want to be in the ELF-VLF bands, (That's what submarines/mining bots/underground seismic sensors use to get signals out).
Obviously that's ridiculous. Everything from ELF to even HF is impossible to use in an "under the rubble" situation because of physics[1]. Bluetooth (UHF) might be "better than nothing" but you're losing at least 25-30 dB (which is like 99.9% of the signal power) in 12 inches of concrete rubble. VHF (like a handheld radio) can buy you another 5 inches.
Honestly I think sound waves travel further in such a medium than RF waves.
[1]: Your "standard reference dipole" antenna needs to be 1/2 or 1/4 your wave length to resonate. At ELF-VLF range you need an antenna that's 10k-1k feet long. You can play with inductors and loops to electrically lengthen your antenna without physically lengthening it, but you're not gonna get that below 500-200 feet. The length of a submarine is an important design consideration when deciding on what type of radio signal it needs to be able to receive/transmit vs how deep it needs to be for stealth.
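The dB figures and antenna lengths above follow from two one-line formulas; a quick sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def db_loss_to_fraction(db):
    """Fraction of power lost for a given attenuation in dB."""
    return 1 - 10 ** (-db / 10)

def half_wave_dipole_m(freq_hz):
    """Resonant half-wave dipole length (lambda/2 = c / (2f))."""
    return (C / freq_hz) / 2

# 30 dB of concrete attenuation leaves only ~0.1% of the power
print(db_loss_to_fraction(30))
# Bluetooth at 2.4 GHz: dipole ~6 cm; VLF at 10 kHz: ~15 km
print(half_wave_dipole_m(2.4e9), half_wave_dipole_m(10e3))
```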
What about muon imaging?
What about Rydberg sensors for VLF earth penetrating imaging, at least?
From "3D scene reconstruction in adverse weather conditions via Gaussian splatting" https://news.ycombinator.com/item?id=42900053 :
> Is it possible to see microwave ovens on the ground with a Rydberg antenna array for in-cockpit drone Remote ID signal location?
With a frequency range from below 1 kHz to multiple THz, FWIU, Rydberg antennae can receive VLF, but IDK about ELF.
IIRC there's actually also a way to transmit with or like Rydberg antennae now; probably with VLF if that's best for the application and there's not already enough backscatter to e.g. infer occlusion with? https://www.google.com/search?q=transmit+with+Rydberg+antenn....
This is pretty cool! I need to learn more about both.
I imagine such fancier tools would be less available among common folks, and more among first responders.
NASA already has the tech to detect heartbeats under rubble using radar [1]. No additional equipment is needed by the rescued. The problem is emergency response can get overwhelmed in large disasters.
If Rydberg sensors become more common, and new tech is added to mobile devices, this could seriously shift the playing field.
I will look into this, because we need out of the box solutions. Thank you!
[1]: https://www.dhs.gov/archive/detecting-heartbeats-rubble-dhs-...
"The Dark Knight" (Batman, 2008) is the one with the phone-based imaging - is it wifi backscatter imaging? - and the societal concerns.
FWIU there are contractor-grade imaging capabilities, and there are military-grade see-through-walls-and-earth capabilities that law enforcement also has, but there are challenges with due process.
At the right time of day, with the right ambient temperature, it's possible to see the studs in the walls with consumer IR but only at a distance.
Also, FWIU it's possible to find plastic in the ground - i.e. banned mines - with thermal imaging at certain times of day.
Could there be a transceiver on a post in the truck at the road, with other flying drones to measure backscatter and/or transceiver emissions?
Hopefully NASA or another solvent company builds a product out of their FINDER research (from JPL).
How many such heartbeat detection devices were ever manufactured? Did they ever add a better directional mic, like one that can read heartbeats from hundreds of meters away? Is it mountable on a motorized tripod?
It sounds like there is a signal separation challenge that applied onboard AI could help with.
From "Homemade AI drone software finds people when search and rescue teams can't" https://news.ycombinator.com/item?id=41764486#41775397 :
> {Code-and-Response, Call-for-Code}/DroneAid: https://github.com/Code-and-Response/DroneAid
> "DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2019) https://github.com/Code-and-Response/DroneAid
> All but one of the DroneAid Symbol Language Symbols are drawn within upward pointing triangles.
> Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?
I am a solo founder developing this new social media platform
Hey everyone! I am a 19 y/o founder taking a gap year to build Airdel, a social media platform startup. I've noticed many of today's social media platforms aren't action-oriented or optimized for real-time collaboration, which is important for people who have personal goals or are working on impactful real-world missions and projects (humanitarian events, e.g. California wildfire victims; emergency healthcare deliveries; climbing Mt. Everest; working on research/patents; or even just building cool projects). That's why I'm building a social media platform that connects your needs with people who can help, whether it's industry stakeholders, professionals, or individuals with a common solution to your problem.
Airdel contains a feed where you can post your mission progress, updates, and achievements, an online directory where you can search for needs and missions to collaborate and partner with others on, and a professional working platform with productivity tools to collaborate on missions until completion. The platform tracks the entire life of your mission/project from start to finish which is recorded on your personal profile. Collaborators can be anonymously reviewed, allowing others to judge trustworthiness for future collaborations.
People who have signed up are in the humanitarian, healthcare, tech, science, business, entertainment, sports, and creative sectors, needing specific urgent solutions, connections, and resources. They span NGOs, institutions, individuals, suppliers, sellers, organizations, and government.
Let me know what you guys think! I will be answering every one of your questions. Shoot!
Landing page: www.getairdel.com
Interested in helping out? Contact me: shannon@getairdel.com
Best, Shannon
From https://news.ycombinator.com/item?id=33302599 :
> If you label things with #GlobalGoal hashtags, others can find solutions to the very same problems.
The Global Goals are the UN Sustainable Development Goals for 2015-2030.
There are 17 Goals, 169 Targets and 247 Indicators.
There have been social media campaigns with hashtags, so there are already "hames" (hashtaggable names) for the high level goals.
Anyone can implement #hashtags, +tags, WikiWords, or similar. The twitter-text library is open source, for example.
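A minimal hashtag extractor is a few lines (the real twitter-text rules also handle Unicode categories and many edge cases this regex ignores):

```python
import re

# Simplified hashtag matcher: a '#' not preceded by a word character,
# followed by one or more word characters.
HASHTAG = re.compile(r"(?<!\w)#(\w+)")

def extract_hashtags(text):
    """Return the hashtag names found in a post."""
    return HASHTAG.findall(text)

print(extract_hashtags("Working on #GlobalGoals, esp. #SDG6 water projects"))
# ['GlobalGoals', 'SDG6']
```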
Re: Schema.org/Mission, :Goal, :Objective [...] linked data: https://news.ycombinator.com/item?id=12525141
"Ask HN: Any well funded tech companies tackling big, meaningful problems?" https://news.ycombinator.com/item?id=24412493
From https://github.com/thegreenwebfoundation/carbon.txt/issues/3... re: a proposed carbon.txt:
> Is there a way for sites to sign a claim with e.g. ld-proofs and then have an e.g. an independent auditor - maybe also with a W3C DID Decentralized Identifier - sign to independently verify?
When an auditor confirms a sustainability report, they could sign it and award a signed blockcert.
(Other ideas considered for similar matching, recommendation, and expert-finding objectives: a StackExchange site for SDG Q&A with upvotes and downvotes.)
I saw that you're not interested in (syndicating content for inbound links with) AT protocol? Is it perceived development cost and estimation of inbound traffic?
What differentiates your offering from existing platforms like LinkedIn? How could your mission be achieved with existing solutions?
There is an SDG vocabulary to link problems and solutions with: https://unsceb.org/common-digital-identifiers-sdgs :
> The common identifiers for Sustainable Development Goals, Targets and Indicators are available online. The portal is provided by the UN Library, which also hosts the UN Bibliographic Information System (UN BIS) Taxonomy. The IRIs have also been mapped to the UN Environment SDG Interface Ontology (SDGIO, by UNEP) and to the UN Bibliographic Information System vocabulary, to enable the use of these resources seamlessly in linking documents and data from different sources to Goals, Targets, Indicators, and closely related concepts.
The UN SDG vocabulary: https://metadata.un.org/sdg/?lang=en
"A Knowledge Organization System for the United Nations Sustainable Development Goals" (2021) https://link.springer.com/chapter/10.1007/978-3-030-77385-4_...
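As a sketch of the linked-data idea, a mission post could point at one of those UN SDG IRIs using schema.org terms (the property choices here are illustrative, not a published profile; the metadata.un.org IRI scheme is the one referenced above):

```python
import json

# Hypothetical JSON-LD for a mission post linked to SDG 6 (Clean Water)
mission = {
    "@context": "https://schema.org",
    "@type": "Project",
    "name": "Well drilling for clean water",
    "about": {"@id": "http://metadata.un.org/sdg/6"},
    "keywords": ["#GlobalGoals", "#SDG6"],
}

print(json.dumps(mission, indent=2))
```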
Researchers get spiking neural behavior out of a pair of transistors
> Specifically, the researchers operate a transistor under what are called "punch-through conditions." This happens when charges build up in a semiconductor in a way that can allow bursts of current to cross through the transistor even when it's in the off state. Normally, this is considered a problem, so processors are made so that this doesn't occur. But the researchers recognized that a punch-through event would look a lot like the spike of a neuron's activity.
> The team found that, when set up to operate on the verge of punch-through mode, it was possible to use the gate voltage to control the charge build-up in the silicon, either shutting the device down or enabling the spikes of activity that mimic neurons. Adjustments to this voltage could allow different frequencies of spiking. Those adjustments could be made using spikes as well, essentially allowing spiking activity
> [...] All of this simply required standard transistors made with CMOS processes, so this is something that could potentially be put into practice fairly quickly.
ScholarlyArticle: "Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4
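The behavior described, a device that accumulates charge until it emits a spike, with the drive level setting the spike frequency, matches the classic leaky integrate-and-fire abstraction; a minimal software sketch (idealized, not the paper's device model):

```python
def lif_spikes(i_in, steps=200, dt=1e-3, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire: the membrane voltage integrates the
    input with leak, and emits a spike (then resets) at threshold."""
    v, times = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau + i_in)
        if v >= v_th:
            times.append(t * dt)
            v = 0.0
    return times

# A larger drive current yields a higher spiking frequency
print(len(lif_spikes(80.0)), len(lif_spikes(120.0)))
```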
How do these compare to memristors?
Memristors: https://en.wikipedia.org/wiki/Memristor
From "A Chip Has Broken the Critical Barrier That Could Ultimately Begin the Singularity" (2025) https://www.aol.com/chip-broken-critical-barrier-could-17000... :
> Here we report an analogue computing platform based on a selector-less analogue memristor array. We use interfacial-type titanium oxide memristors with a gradual oxygen distribution that exhibit high reliability, high linearity, forming-free attribute and self-rectification. Our platform — which consists of a selector-less (one-memristor) 1 K (32 × 32) crossbar array, peripheral circuitry and digital controller — can run AI algorithms in the analogue domain by self-calibration without compensation operations or pretraining.
Can't these components model spreading activation?
Spreading activation: https://en.wikipedia.org/wiki/Spreading_activation
Memristors are cool, and all-in-one (including memory), but they are not a standard CMOS process.
That is the whole difference.
Anyway, any of these techs will need a sort of CCD matrix (maybe something like modern Flash as storage, or a DRAM cell with refresh), but CMOS is very straightforward to produce and to use.
Why I mention CCD: it is analog storage with multiple levels, organized as multiple lines with an output line on one side. It could also be used as a solid-state circular buffer to access separate cells.
So, these CMOS transistors will work as neurons, but the weights will be stored as analog values in a CCD.
Is that more debuggable?
Re: the Von Neumann bottleneck, debuggability, and I guess any form of computation in RAM; https://news.ycombinator.com/item?id=42312971
It seems like memristors have been n years away for quite awhile now; maybe like QC.
Wonder if these would work for spiking neural behavior with electronic transistors:
"Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9" (2024) https://news.ycombinator.com/item?id=42318944
Cerebras WSE is probably the fastest RAM bus, though it's not really a bus; it's just multiple addressed chips on the same wafer, FWIU.
> Is that more debuggable?
I've seen many approaches to computing in my life: optical, mechanical, hydro, even pneumatic. Classic digital CMOS is the most universal, with a huge range of mature debugging instruments.
Digital CMOS is so universal that it is even worth paying magnitudes worse power consumption until you find the best structure, and only then switching to something less debuggable but with better consumption.
Unfortunately, I don't have enough data to state which will be better on power, CMOS or memristors. Right now CMOS is mature COTS tech, but memristors are still a few years from COTS.
Cerebras, as far as I know, is based on digital CMOS, just with some tricks to use nearly the whole wafer. BTW, Sir Clive Sinclair tried a similar approach to make wafer-scale storage, but unsuccessfully.
> it's not really a bus it's just addressed multiple chips on the same wafer
I'm an electronics engineer, and I even once baked one chip layer in a semiconductor practicum, so I'm aware of the technologies.
As I said about Sinclair, a few companies have tried to bring something new to the semiconductor market, and some have even succeeded.
RAM manufacturers have long used the approach of making multiple chips on one wafer: most RAM packages actually contain 4-6 RAM dies, but a few of those don't pass tests and are disabled by fuses, so chips appear with 2 or 4 RAMs enabled, and even with odd numbers of enabled dies.
It looks like Cerebras uses an approach similar to the RAM manufacturers', just for another niche.
Compiler Options Hardening Guide for C and C++
While all of these are very useful, you'll find that a lot of these are already enabled by default in many distributions of the gcc compiler. Sometimes they're embedded in the compiler itself through a patch or configure flag, and sometimes they're added through CFLAGS variables during the compilation of distribution packages. I can only really speak of gentoo, but here's a non-exhaustive list:
* -fPIE is enabled with --enable-default-pie in GCC's ./configure script
* -fstack-protector-strong is enabled with --enable-default-ssp in GCC's ./configure script
* -Wl,-z,relro is enabled with --enable-relro in Binutils' ./configure script
* -Wp,-D_FORTIFY_SOURCE=2, -fstack-clash-protection, -Wl,-z,now and -fcf-protection=full are enabled by default through patches to GCC in Gentoo.
* -Wl,--as-needed is enabled through the default LDFLAGS
For reference, here's the default compiler flags for a few other distributions. Note that these don't include GCC patches:
* Arch Linux: https://gitlab.archlinux.org/archlinux/packaging/packages/pa...
* Alpine Linux: https://gitlab.alpinelinux.org/alpine/abuild/-/blob/master/d...
* Debian: It's a tiny bit more obscure, but running `dpkg-buildflags` on a fresh container returns the following: CFLAGS=-g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/<myuser>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection
From https://news.ycombinator.com/item?id=38505448 :
> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Is there a good reference for comparing these compile-time build flags and their defaults with Make, CMake, Ninja Build, and other build systems, on each platform and architecture?
From https://news.ycombinator.com/item?id=41306658 :
> From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
>> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
But those are per-arch performance flags, not security flags.
In my experience distributions only patch GCC or modify the package building environment variables to add compiler flags. You can be certain that the compiler flags used in build systems like cmake and meson will be vanilla.
Make adds no additional compiler flags (check the output of "make -n -p"). Neither does Ninja.
Autotools is extremely conservative with compiler flags and will only really add -O2 -g, as well as include paths and defines specified by the developer.
CMake has some default compiler flags, depending on your CMAKE_BUILD_TYPE, mostly affecting optimization, and disabling asserts() with Release (-DNDEBUG). It also has some helpers for precompiled headers and link-time optimizations that enable the relevant flags.
Meson uses practically the same flags as cmake, with the exception of not passing -DNDEBUG unless the developer of the meson build really wants it to.
These are all the relevant build systems for Linux packages. I'm not familiar with gn, bazel, etc. In general, build systems dabble a bit in optimization flags but pay no mind to hardening.
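Given that, auditing a build's effective CFLAGS/LDFLAGS against a hardening checklist is straightforward; a sketch (the flag list here is illustrative, not the guide's full set):

```python
# Illustrative subset of hardening flags from guides like the OpenSSF's
HARDENING_FLAGS = {
    "-fstack-protector-strong": "stack canaries",
    "-fstack-clash-protection": "stack clash protection",
    "-fcf-protection": "control-flow protection (CET)",
    "-Wl,-z,relro": "read-only relocations",
    "-Wl,-z,now": "immediate binding (full RELRO)",
    "-fPIE": "position-independent executable",
}

def missing_hardening(flags_str):
    """Return the checklist flags absent from a CFLAGS/LDFLAGS string."""
    tokens = flags_str.split()
    return [f for f in HARDENING_FLAGS
            if not any(t.startswith(f) for t in tokens)]

debian = "-g -O2 -fstack-protector-strong -fstack-clash-protection -fcf-protection"
print(missing_hardening(debian))  # ['-Wl,-z,relro', '-Wl,-z,now', '-fPIE']
```

Note that linker flags like relro/now often come from patched defaults rather than CFLAGS, which is exactly why distro comparisons are hard.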
Show HN: Make SVGs interactive in React with 1 line
Hey HN
I built svggles (npm: interactive-illustrations), a React utility that makes it easy to add playful, interactive SVGs to your frontend.
It supports mouse-tracking, scroll, hover, and other common interactions, and it's designed to be lightweight and intuitive for React devs.
The inspiration came from my time playing with p5.js — I loved how expressive and fun it was to create interactive visuals. But I also wanted to bring that kind of creative freedom to everyday frontend work, in a way that fits naturally into the React ecosystem.
My goal is to help frontend developers make their UIs feel more alive — not just functional, but fun. I also know creativity thrives in community, so it's open source and I’d love to see contributions from artists, developers, or anyone interested in visual interaction.
Links: Website + Docs: svggles.vercel.app
GitHub: github.com/shantinghou/interactive-illustrations
NPM: interactive-illustrations
Let me know what you think — ideas, feedback, and contributions are all welcome
MDN docs > SVG and CSS: https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorials/S...
Python lock files have officially been standardized
What does this mean for pip-tools' requirements.in, Pipfile.lock, pip constraints.txt, poetry.lock, pyproject.toml, and uv.lock?
I think the plan is to replace all of those.
The PEP has buy in from all the major tools.
It doesn't mention them in alphabetical order.
From last week re: GitHub Dependabot and conda/mamba/micromamba/pixi support:
"github dependabot, meta.yaml, environment.yml, conda-lock.yaml, pixi.lock" https://github.com/regro/cf-scripts/issues/3920#issuecomment... https://github.com/dependabot/dependabot-core/issues/2227#is... incl. links to the source of dependabot
Quantum advantage for learning shallow NNs with natural data distributions
ScholarlyArticle: "Quantum advantage for learning shallow neural networks with natural data distributions" (2025) https://arxiv.org/abs/2503.20879
NewsArticle: "Google Researchers Say Quantum Theory Suggests a Shortcut for Learning Certain Neural Networks" (2025) https://thequantuminsider.com/2025/03/31/google-researchers-... :
> Using this model, [Quantum Statistical Query (QSQ) learning,] the authors design a two-part algorithm. First, the quantum algorithm finds the hidden period in the function using a modified form of quantum Fourier transform — a core capability of quantum computers. This step identifies the unknown weight vector that defines the periodic neuron. In the second part, it applies classical gradient descent to learn the remaining parameters of the cosine combination. The algorithm is shown to require only a polynomial number of steps, compared to the exponential cost for classical learners. [...]
> The researchers carefully address several technical challenges. For one, real-valued data must be discretized into digital form to use in a quantum computer.
Quantum embedding:
> Another way to put this: real-world numbers must be converted into digital chunks so a quantum computer can process them. But naive discretization can lose the periodic structure, making it impossible to detect the right signal. The authors solve this by designing a pseudoperiodic discretization. This approximates the period well enough for quantum algorithms to detect it.
> They also adapt an algorithm from quantum number theory called Hallgren’s algorithm to detect non-integer periods in the data. While Hallgren’s method originally worked only for uniform distributions, the authors generalize it to work with “sufficiently flat” non-uniform distributions like Gaussians and logistics, as long as the variance is large enough.
There is not yet a Wikipedia article on (methods of) "Quantum embedding".
How many qubits are necessary to roll 2, 8, or 6-sided quantum dice?
Embedding (mathematics) https://en.wikipedia.org/wiki/Embedding
Embedding (machine learning) https://en.wikipedia.org/wiki/Embedding_(machine_learning)
/? quantum embedding review: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=qua...
Continuous embedding: https://en.wikipedia.org/wiki/Continuous_embedding
Continuous-variable quantum information; https://en.wikipedia.org/wiki/Continuous-variable_quantum_in...
Symmetry between up and down quarks is more broken than expected
BTW, isospin is actually about how many up vs. down quarks there are. It's not a fundamental property like spin or charge.
It's an old term that was coined before they knew that up and down quarks existed.
Personally I find the term outdated because there are 4 other quarks, and isospin only talks about two of them.
Isospin symmetry: https://en.wikipedia.org/wiki/Isospin #History :
> Isospin is also known as isobaric spin or isotopic spin.
Supersymmetry: https://en.wikipedia.org/wiki/Supersymmetry
Does the observed isospin asymmetry disprove supersymmetry, if isospin symmetry is an approximate symmetry?
Stochastic Reservoir Computers
"Stochastic reservoir computers" (2025) https://www.nature.com/articles/s41467-025-58349-6 :
> Abstract: [...] This allows the number of readouts to scale exponentially with the size of the reservoir hardware, offering the advantage of compact device size. We prove that classes of stochastic echo state networks form universal approximating classes. We also investigate the performance of two practical examples in classification and chaotic time series prediction. While shot noise is a limiting factor, we show significantly improved performance compared to a deterministic reservoir computer with similar hardware when noise effects are small.
Glutamate Unlocks Brain Cell Channels to Enable Thinking and Learning
ScholarlyArticle: "Glutamate gating of AMPA-subtype iGluRs at physiological temperatures" (2025) https://www.nature.com/articles/s41586-025-08770-0 ; CryoEM
From https://news.ycombinator.com/item?id=39836127 :
> High glutamine-to-glutamate ratio predicts the ability to sustain motivation [...]
> MSG [...]
> So, IIUC, when you feed LAB glutamate, you get GABA?
Pixar One Thirty
Tessellation #History, #Overview: https://en.wikipedia.org/wiki/Tessellation :
> More formally, a tessellation or tiling is a cover of the Euclidean plane by a countable number of closed sets, called tiles, such that the tiles intersect only on their boundaries.
Category:Tileable textures: https://commons.wikimedia.org/wiki/Category:Tileable_texture...
Can PxrRoundCube be called from RenderManForBlender, with BlenderMCP? https://github.com/prman-pixar/RenderManForBlender
Texture synthesis > Methods: https://en.wikipedia.org/wiki/Texture_synthesis#Methods
Khan Academy > Computing > Pixar in a Box > Unit 8: Patterns: https://www.khanacademy.org/computing/pixar/pattern
Matrix Calculus (For Machine Learning and Beyond)
> The class involved numerous example numerical computations using the Julia language, which you can install on your own computer following these instructions. The material for this class is also located on GitHub at https://github.com/mitmath/matrixcalc
Mathematical Compact Models of Advanced Transistors [pdf]
This is from 2018, anyone in the field know if it's still state of the art or a historic curiosity? I know that we've started using euv since then which seems like it would change things.
State of the art in transistors?
- "Researchers get spiking neural behavior out of a pair of [CMOS] transistors" (2025) https://news.ycombinator.com/item?id=43503644
- Memristors
- Graphene-based transistors
EUV and nanolithography?
SOTA alternatives to EUV for nanolithography include NIL nanoimprint lithography (at 10-14nm at present fwiu), nanoassembly methods like atomic/molecular deposition and optical tweezers, and a new DUV solid-state laser light source at 193nm.
How to report a security issue in an open source project
Security.txt is a standard for sharing vuln disclosure information; /.well-known/security.txt or /security.txt .
security.txt: https://en.wikipedia.org/wiki/Security.txt
Responsible disclosure -> CVD: Coordinated Vulnerability Disclosure: https://en.wikipedia.org/wiki/Coordinated_vulnerability_disc...
OWASP Vulnerability Disclosure Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...
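For reference, a minimal security.txt per RFC 9116, served at /.well-known/security.txt (the contact address and URLs here are placeholders):

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T00:00:00.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the only required fields.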
Publishers trial paying peer reviewers – what did they find?
> USD $250
How much deep research does $250 yield by comparison?
Knowledge market > Examples; Google Answers, Yahoo Answers, https://en.wikipedia.org/wiki/Knowledge_market#Examples
I'm not sure why one would compare reviews by acknowledged experts in a field with stuff written by anonymous randos, and it seems highly unlikely that anyone with the appropriate qualifications would be lurking on some mechanical turk-like site.
I'm also deeply suspicious of the confidentiality of anything sent to one of those sites.
However this does suggest the idea that a high-powered university in a low-income country might be able to cut a deal to provide reviewing services...
You can get 50 reviews on Fiverr for that price!
Tracing the thoughts of a large language model
XAI: Explainable artificial intelligence: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Inside arXiv–The Most Transformative Platform in All of Science
ArXiv: https://en.wikipedia.org/wiki/ArXiv
ArXiv accepts .ps (PostScript), .tex (LaTeX source), and .pdf (PDF) ScholarlyArticle uploads.
ArXiv docs > Formats for text of submission: https://info.arxiv.org/help/submit/index.html#formats-for-te...
The internet and the web are the most transformative platforms in all of science, though.
Chimpanzees act as 'engineers', choosing materials to make tools
Tool use by non-humans > Primates > Chimpanzees and bonobos: https://en.wikipedia.org/wiki/Tool_use_by_non-humans#Chimpan...
Accessible open textbooks in math-heavy disciplines
"BookML: automated LaTeX to bookdown-style HTML and SCORM, powered by LaTeXML" https://vlmantova.github.io/bookml/
LaTeXML: https://en.wikipedia.org/wiki/LaTeXML :
LaTeXML parses LaTeX with Perl and emits XML.
SCORM is a standard for educational content in ZIP packages which is supported by Moodle, ILIAS, Sakai, Canvas, and a number of other LMS Learning Management Systems.
SCORM: https://en.wikipedia.org/wiki/Sharable_Content_Object_Refere...
xAPI (aka Experience API, aka Tin Can API) is a successor spec to SCORM for sending event messages to LRS Learning Record Stores. Like SCORM, xAPI is stewarded by ADL.
re: xAPI, schema.org/Action, and JSON-LD: https://github.com/RusticiSoftware/TinCanSchema/issues/7
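For comparison, a minimal xAPI statement has an actor/verb/object shape (the mbox address and activity id below are made-up placeholders):

```json
{
  "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": {"en-US": "completed"}
  },
  "object": {
    "id": "https://example.com/course/algebra-1",
    "definition": {"name": {"en-US": "Algebra I"}}
  }
}
```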
schema.org/Action describes potential actions: https://schema.org/docs/actions.html
For example, from the Schema.org "Potential Actions" doc: https://schema.org/docs/actions.html :
  {
    "@context": "https://schema.org",
    "@type": "Movie",
    "name": "Footloose",
    "potentialAction": {
      "@type": "WatchAction"
    }
  }
That could be a syllabus. Action types include: BuyAction, AssessAction > ReviewAction,
Schema.org > "Full schema hierarchy" > [Open hierarchy] > Action and rdfs:subClassOf subclasses thereof: https://schema.org/docs/full.html
What Linked Data should [math textbook] publishing software include when generating HTML for the web?
https://schema.org/CreativeWork > Book, Audiobook, Article > ScholarlyArticle, Guide, HowTo, Blog, MathSolver
The schema.org Thing > CreativeWork LearningResource RDFS class has the :assesses, :competencyRequired, :educationalLevel, :educationalAlignment, and :teaches RDFS properties; https://schema.org/LearningResource
You can add bibliographic metadata and curricular Linked Data to [OER LearningResource] HTML with schema.org classes and properties as JSON-LD, RDFa, or Microdata.
The schema.org/about property has a domain which includes CreativeWork and a range which includes Thing, so a :CreativeWork is :about a :Thing which could be a subclass of :CreativeWork.
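As a sketch of what that looks like in generated HTML, here is hypothetical JSON-LD for an open textbook (the title, author, and license values are illustrative, not from any real book):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Book",
  "name": "Open Linear Algebra",
  "author": {"@type": "Person", "name": "A. Author"},
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "learningResourceType": "textbook",
  "educationalLevel": "undergraduate"
}
</script>
```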
.
I work with MathJax and LaTeX in notebooks a bit, and have generated LaTeX and then PDF with Sphinx and texlive, e.g. with the ReadTheDocs docker container, which already has the multiple GB of LaTeX packages necessary to render a README.rst as PDF without pandoc:
The Jupyter Book docs now describe how that works.
Jupyter Book docs > Customize LaTeX via Sphinx: https://jupyterbook.org/en/stable/advanced/pdf.html#customiz...
How to build the docs with the readthedocs docker image oneself: https://github.com/jupyter-book/jupyter-book/issues/991
ReadTheDocs > Dev > Design > Build Images > Time required to install languages at build time [with different package managers with varying performance] https://docs.readthedocs.com/dev/latest/design/build-images....
The jupyter-docker-stacks, binderhub, and condaforge/miniforge3 images build with micromamba now IIRC.
condaforge/miniforge3: https://hub.docker.com/r/condaforge/miniforge3
Recently, I've gotten into .devcontainer/devcontainer.json, which allows use of one's own Dockerfile or a preexisting docker image, installs LSP and vscode on top, and then runs the onCreateCommand and postStartCommand.
A number of tools support devcontainer.json: https://containers.dev/supporting
Devcontainers could be useful for open textbooks in math-heavy disciplines; so that others can work within, rebuild, and upgrade the same container env used to build the textbook.
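A minimal devcontainer.json for such a textbook build env might look like this (the image and commands are illustrative, not from any particular project; name, image, onCreateCommand, and postStartCommand are standard devcontainer spec properties):

```json
{
  "name": "textbook-build",
  "image": "condaforge/miniforge3",
  "onCreateCommand": "mamba env update -n base -f environment.yml",
  "postStartCommand": "jupyter-book --version"
}
```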
Re: MathJax, LaTeX, and notebooks:
To left-align a LaTeX expression in a (Jupyter,Colab,VScode,) notebook wrap the expression with single dollar signs. To center-align a LaTeX expression in a notebook, wrap it with double dollar signs:
$ \alpha_{\beta_1} $
$$ \alpha_{\beta_2} $$
Textbooks, though? Interactive is what they want. How can we make textbooks interactive?
It used to be that textbooks were there to be copied down from; you copied by hand from the textbook.
To engage and entertain this generation:
ManimCE, scriptable 3d simulators with test assertions, Thebelab,
Jupyter Book docs > "Launch into interactive computing interfaces" > BinderHub ( https://mybinder.org ), JupyterHub, Colab, Deepnote: https://jupyterbook.org/en/stable/interactive/launchbuttons....
JupyterLite-xeus builds a jupyterlite static site from an environment.yml; such that e.g. the xeus-python kernel and other packages are compiled to WebAssembly (WASM) so that you can run Jupyter notebooks in a browser without a server:
repo2jupyterlite works like repo2docker, which powers BinderHub, which generates a container with a current version of Jupyter installed after building the container according to one or more software dependency requirement specification files in /.binder or the root of the repo.
repo2jupyterlite: https://github.com/jupyterlite/repo2jupyterlite
jupyterlite-xeus: https://jupyterlite-xeus.readthedocs.io/en/latest/
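E.g., a sketch of an environment.yml for jupyterlite-xeus (the channel URL and package list are illustrative; check the jupyterlite-xeus docs for the current emscripten-forge channel):

```yaml
name: xeus-lite
channels:
  - https://repo.prefix.dev/emscripten-forge-dev
  - conda-forge
dependencies:
  - xeus-python
  - numpy
```

jupyterlite-xeus then compiles the listed kernel and packages to WASM for the static site.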
Getting hit by lightning is good for some tropical trees
Also re: lightning and living things, from https://news.ycombinator.com/item?id=43044159 :
> "Gamma radiation is produced in large tropical thunderstorms" (2024)
> "Gamma rays convert CH4 to complex organic molecules [like glycine,], may explain origin of life" (2024)
Harnessing Quantum Computing for Certified Randomness
ScholarlyArticle: "Certified randomness using a trapped-ion quantum processor" (2025) https://www.nature.com/articles/s41586-025-08737-1
Re: different - probably much less expensive - approaches to RNG;
From "Cloudflare: New source of randomness just dropped" (2005) https://news.ycombinator.com/item?id=43321797 :
> Another source of random entropy better than a wall of lava lamps:
>>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
100Gbit/s is faster than qualifying noise from a 56 qubit quantum computer?
>> google/paranoid_crypto.lib.randomness_tests
There's yet no USB or optical interconnect RNG based on quantum vacuum fluctuations, though.
Building and deploying a custom site using GitHub Actions and GitHub Pages
I just set up an apt repo of uv packages using GitHub Pages via Actions and wrote up some notes here: https://linsomniac.com/post/2025-03-18-building_and_publishi...
Looks like Simon and I were working on very similar things simultaneously.
There should be a "Verify signature of checksums and Verify checksums" at "Download latest uv binary".
GitHub has package repos; "GitHub Packages" for a few package formats, and OCI artifacts. https://docs.github.com/en/packages/working-with-a-github-pa...
OCI Artifacts with labels include signed Container images and signed Packages.
Packages can be hosted as OCI image repository artifacts with signatures; but package managers don't natively support OCI image stores, though many can install packages hosted at GitHub Packages URLs which are referenced by a signed package repository Manifest on a GitHub Pages or GitLab Pages site (e.g. with CF DNS and/or CloudFlare Pages in front)
>There should be a "Verify signature of checksums and Verify checksums"
I get it, but verifying checksums of downloads via https from github releases, downloaded across github's architecture (admittedly Azure) to github's runners seems like overkill. Especially as I don't see any signatures of the checksums, so the checksums are being retrieved via that same infrastructure with the same security guarantees. What am I missing?
If downloading over https and installing were enough, we wouldn't need SLSA, TUF, or sigstore.
.deb and .rpm package managers typically reference a .tar.gz with a checksum; commit that to git signed with a GPG key; and then the package is signed by a (reproducible) CI build with the signing key for that repo's packages.
conda-forge can automatically send a Pull Request to update the upstream archive URL and cached checksum in the package manifest when there is a new upstream package.
Actually, the conda-forge bot does more than that;
From https://github.com/regro/cf-scripts/issues/3920#issuecomment... re: (now github) dependabot maybe someday scanning conda environment.yml, conda-lock.yml, and/or feedstock meta.yml:
> So the bot here does a fair bit more than update dependencies and/or versions.
> We have globally pinned ABIs which have to migrated in a specific order. We also use the bot here to start the migrations, produce progress / error outputs on the status page, and then close the migrations. The bot here also does specific migrations of dependencies that have been renamed, recipe maintenance, and more.
Dependabot scans for vulnerable software package versions in software dependency specification documents for a number of languages and package formats, when you git push to a repo which has dependabot configured in their dependabot.yml:
From the dependabot.yml docs: https://docs.github.com/en/code-security/dependabot/working-... :
> Define one package-ecosystem element for each package manager that you want Dependabot to monitor for new versions
Dependabot supports e.g. npm, pip, gomod, cargo, docker, github-actions, devcontainers,
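A minimal dependabot.yml covering two of those ecosystems looks like:

```yaml
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```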
SBOM tools have additional methods of determining which packages are installed on a specific server or container instance.
Static sites are sometimes build-and-forget projects that also need regular review of the statically-compiled-in dependencies for known vulns.
E.g. jupyterlite-xeus builds WASM static sites from environment.yml; though Hugo is probably much faster.
AI Supply Chain Attack: How Malicious Pickle Files Backdoor Models
From "Insecurity and Python Pickles" (2024) https://news.ycombinator.com/item?id=39685128 :
> There should be a data-only pickle serialization protocol (that won't serialize or deserialize code).
> How much work would it be to create a pickle protocol that does not exec or eval code?
"Title: Pickle protocol version 6: skipcode pickles" https://discuss.python.org/t/create-a-new-pickle-protocol-ve...
I have to agree with Chris Angelico there:
> Then the obvious question is: Why? Why use pickle? The most likely answer is “because <X> can’t represent what I need to transmit”, but for that to be at all useful to your proposal, you need to show examples that won’t work in well-known safe serializers.
Code in packages should be signed.
Code in pickles should also be signed.
I have no need for the pickle module now, but years ago thought there might have been a safer way to read data that was already in pickles.
For backwards compatibility, skipcode=False must be the default, were someone to implement a pickle parser that doesn't eval code.
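Until such a protocol exists, the pickle docs' recommended approach of overriding Unpickler.find_class gives a data-only(ish) unpickler. A minimal sketch that allows only a small allow-list of builtin container types:

```python
import builtins
import io
import pickle

SAFE_BUILTINS = {"list", "dict", "set", "tuple", "frozenset", "bytearray"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Allow only an explicit allow-list of builtins; refuse every other
        # global, so no classes or functions can be smuggled in.
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data round-trips; pickles referencing arbitrary globals are rejected.
assert restricted_loads(pickle.dumps({"a": [1, 2]})) == {"a": [1, 2]}
```

This doesn't make pickle safe in general (it still runs the pickle VM), but it refuses the GLOBAL/STACK_GLOBAL lookups that arbitrary-code pickles depend on.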
JS/ES/TS Map doesn't map to JSON.
Pickle is still good for custom objects (JSON loses methods and also order), graphs and circular refs (JSON breaks), and functions and lambdas (essential for ML and distributed systems), and it's provided out of the box.
We're contemplating protocols that don't evaluate or run code; that rules out serializing functions or lambdas (i.e., code).
Custom objects in Python don't have "order" unless they're using `__slots__` - in which case the application already knows what they are from its own class definition. Similarly, methods don't need to be serialized.
A general graph is isomorphic to a sequence of nodes plus a sequence of vertex definitions. You only need your own lightweight protocol on top.
Because globals(), locals(), Classes and classInstances are backed by dicts, and dicts are insertion ordered in CPython since 3.6 (and in the Python spec since 3.7), object attributes are effectively ordered in Python.
Object instances with __slots__ do not have a dict of attributes.
__slots__ attributes of Python classes are ordered, too.
(Sorting and order; Python 3 objects must define at least __eq__ and __lt__ in order to be sorted. @functools.total_ordering https://docs.python.org/3/library/functools.html#functools.t... )
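A quick illustration of that, with a toy Version class invented for the example:

```python
from functools import total_ordering

@total_ordering
class Version:
    """Sortable because __eq__ and __lt__ are defined;
    @total_ordering derives __le__, __gt__, and __ge__ from them."""
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return self.n == other.n
    def __lt__(self, other):
        return self.n < other.n

versions = sorted([Version(3), Version(1), Version(2)])
assert [v.n for v in versions] == [1, 2, 3]
assert Version(1) <= Version(2)  # provided by total_ordering
```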
Are graphs isomorphic if their nodes and edges are in a different sequence?
  assert dict(a=1, b=2) == dict(b=2, a=1)

  from collections import OrderedDict as odict
  assert odict(a=1, b=2) != odict(b=2, a=1)
To cryptographically sign RDF in any format (XML, JSON, JSON-LD, RDFa), a canonicalization algorithm is applied to normalize the input data prior to hashing and cryptographically signing. Like Merkle hashes of tree branches, a cryptographic signature of a normalized graph is a substitute for more complete tests of isomorphism.
RDF Dataset Canonicalization algorithm: https://w3c-ccg.github.io/rdf-dataset-canonicalization/spec/...
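As a much-simplified analogy (JSON key ordering only; real RDF Dataset Canonicalization must also deterministically relabel blank nodes), canonicalize-then-hash looks like:

```python
import hashlib
import json

def canonical_hash(obj) -> str:
    # Sort keys and fix separators so equivalent data yields one byte string,
    # which is then hashed as the input to a signature.
    canon = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

a = {"name": "Footloose", "@type": "Movie"}
b = {"@type": "Movie", "name": "Footloose"}
assert canonical_hash(a) == canonical_hash(b)  # same data, same signature input
```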
Also, pickle stores the class name to unpickle data into as a (variously-dotted) str. If the version of the object class is not in the class name, pickle will unpickle data from appA.Pickleable into appB.Pickleable (or PickleableV1 into PickleableV2 objects, as long as PickleableV2=PickleableV1 is specified in the deserializer).
So do methods need to be pickled? No for security. Yes because otherwise the appB unpickled data is not isomorphic with the pickled appA.Pickleable class instances.
One Solution: add a version attribute on each object, store it with every object, and discard it before testing equality by other attributes.
Another solution: include the source object version in the class name that gets stored with every pickled object instance, and try hard to make sure the dest object is the same.
Experimental test of the nonlocal energy alteration between two quantum memories
ScholarlyArticle: "Test of Nonlocal Energy Alteration between Two Quantum Memories" (2025) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Notebooks as reusable Python programs
i wanted to like marimo, but the best notebook interface i've tried so far is vscode's interactive window [0]. the important thing is that it's a python file first, but you can divide up the code into cells to run in the jupyter kernel either all at once or interactively.
0: https://code.visualstudio.com/docs/python/jupyter-support-py
Spyder also has these, possibly for longer than vscode [0]. I don't know who had this idea first but I remember some vim plugins doing that long ago, so maybe the vim community?
[0] https://docs.spyder-ide.org/current/panes/editor.html#code-c...
Jupytext docs > The percent format: https://github.com/mwouts/jupytext/blob/main/docs/formats-sc... :
  # %% [markdown]
  # Another Markdown cell

  # %%
  # This is a code cell
  class A:
      def one(self):
          return 1

  # %% Optional title [cell type] key="value"
MyST Markdown has: https://mystmd.org/guide/notebooks-with-markdown :
```{code-cell} LANGUAGE
:key: value

CODE TO BE EXECUTED
```
And:
---
kernelspec:
  name: javascript
  display_name: JavaScript
---
# Another markdown cell
```{code-cell} javascript
// This is a code cell
console.log("hello javascript kernel");
```
But it also does not store the outputs in the markdown.
Thanks, this is a good read. I did not know MyST; it is very cool.
The vim plugin I was talking about was vim-slime [0], which seems to date from 2007 and does have regions delimited with #%%.
Slime comes from Emacs originally, but I could not find if the original Emacs slime has regions.
Matlab also has those, which they call code sections [1]. Hard to find when they were introduced. Maybe 2021, but I suspect older.
None of those stores the output of the command.
[0] https://vimawesome.com/plugin/vim-slime
[1] https://www.mathworks.com/help/matlab/matlab_prog/create-and...
The Spyder docs list other implementations of the percent format for notebooks as markdown and for delimiting runnable blocks within source code:
# %%
"^# %%"
Org-mode was released in 2003:
https://en.wikipedia.org/wiki/Org-mode
Org-mode supports code blocks: https://orgmode.org/manual/Structure-of-Code-Blocks.html :
#+BEGIN_SRC <language>
#+END_SRC
Literate programming; LaTeX:
https://en.wikipedia.org/wiki/Literate_programming
Notebook interface; Markdown + MathTeX: https://en.wikipedia.org/wiki/Notebook_interface ;
$ \delta_{2 3 4} = 5 $
$$ \Delta_\text{3 4 5} = 56 $$
Show HN: Aiopandas – Async .apply() and .map() for Pandas, Faster API/LLMs Calls
Can this be merged into pandas?
Pandas does not currently install tqdm by default.
pandas-dev/pandas//pyproject.toml [project.optional-dependencies] https://github.com/pandas-dev/pandas/blob/8943c97c597677ae98...
Dask solves for various adjacent problems; IDK if pandas, dask, or dask-cudf would be faster with async?
Dask docs > Scheduling > Dask Distributed (local) https://docs.dask.org/en/stable/scheduling.html#dask-distrib... :
> Asynchronous Futures API
Dask docs > Deploy Dask Clusters; local multiprocessing poll, k8s (docker desktop, podman-desktop,), public and private clouds, dask-jobqueue (SLURM,), dask-mpi: https://docs.dask.org/en/stable/deploying.html#deploy-dask-c...
Dask docs > Dask DataFrame: https://docs.dask.org/en/stable/dataframe.html :
> Dask DataFrames are a collection of many pandas DataFrames.
> The API is the same. The execution is the same.
> [concurrent.futures and/or @dask.delayed]
tqdm.dask: https://tqdm.github.io/docs/dask/#tqdmdask .. tests/tests_pandas.py: https://github.com/tqdm/tqdm/blob/master/tests/tests_pandas.... , tests/tests_dask.py: https://github.com/tqdm/tqdm/blob/master/tests/tests_dask.py
tqdm with dask.distributed: https://github.com/tqdm/tqdm/issues/1230#issuecomment-222379... , not yet a PR: https://github.com/tqdm/tqdm/issues/278#issuecomment-5070062...
dask.diagnostics.progress: https://docs.dask.org/en/stable/diagnostics-local.html#progr...
dask.distributed.progress: https://docs.dask.org/en/stable/diagnostics-distributed.html...
dask-labextension runs in JupyterLab and has a parallel plot visualization of the dask task graph and progress through it: https://github.com/dask/dask-labextension
dask-jobqueue docs > Interactive Use > Viewing the Dask Dashboard: https://jobqueue.dask.org/en/latest/clusters-interactive.htm...
https://examples.dask.org/ > "Embarrassingly parallel Workloads" tutorial re: "three different ways of doing this with Dask: dask.delayed, concurrent.Futures, dask.bag": https://examples.dask.org/applications/embarrassingly-parall...
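For small cases, Dask's delayed/futures style can be approximated with the stdlib's concurrent.futures (here a thread pool; costly() is a stand-in for a per-row computation or I/O call):

```python
from concurrent.futures import ThreadPoolExecutor

def costly(x):
    # Stand-in for an expensive per-element computation or I/O call.
    return x * x

# pool.map preserves input order, like Series.map / DataFrame.apply.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(costly, range(8)))

assert results == [0, 1, 4, 9, 16, 25, 36, 49]
```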
Thank you for the input! To be honest, I don’t use Dask often, and as a regular Pandas user, I don’t feel the most qualified to comment—but here we go.
Can this be merged into Pandas?
I’d be honored if something I built got incorporated into Pandas! That said, keeping aiopandas as a standalone package has the advantage of working with older Pandas versions, which is useful for workflows where upgrading isn’t feasible. I also can’t speak to the downstream implications of adding this directly into Pandas.
Pandas does not install tqdm by default.
That makes sense, and aiopandas doesn’t require tqdm either. You can pass any class with __init__, update, and close methods as the tqdm argument, and it will work the same. Keeping dependencies minimal helps avoid unnecessary breakage.
What about Dask?
I’m not a regular Dask user, so I can’t comment much on its internals. Dask already supports async coroutines (Dask Async API), but for simple async API calls or LLM requests, aiopandas is meant to be a lightweight extension of Pandas rather than a full-scale parallelization framework. If you’re already using Dask, it probably covers most of what you need, but if you’re just looking to add async support to Pandas without additional complexity, aiopandas might be a more lightweight option.
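The lightweight per-row async pattern being discussed can be sketched with plain asyncio (fetch_one here is a hypothetical stand-in for an API/LLM call):

```python
import asyncio

async def fetch_one(x):
    # Hypothetical I/O-bound call; sleep stands in for network latency.
    await asyncio.sleep(0.01)
    return x * 2

async def fetch_all(values):
    # Issue all calls concurrently instead of awaiting them one by one;
    # gather() preserves input order, like Series.map.
    return await asyncio.gather(*(fetch_one(v) for v in values))

results = asyncio.run(fetch_all([1, 2, 3]))
assert results == [2, 4, 6]
```

The speedup comes from overlapping waits, not parallel CPU work, which is why this suits API calls rather than numeric crunching.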
Fair benchmarks would justify merging aiopandas into pandas. Benchmark grid axes: aiopandas, dtype_backend="pyarrow", dask-cudf
pandas pyarrow docs: https://pandas.pydata.org/docs/dev/user_guide/pyarrow.html
/? async pyarrow: https://www.google.com/search?q=async+pyarrow
/? repo:apache/arrow async language:Python : https://github.com/search?q=repo%3Aapache%2Farrow+async+lang... :
test_flight_async.py https://github.com/apache/arrow/blob/main/python/pyarrow/tes...
pyarrow/src/arrow/python/async.h: https://github.com/apache/arrow/blob/main/python/pyarrow/src... : "Bind a Python callback to an arrow::Future."
--
dask-cudf: https://docs.rapids.ai/api/dask-cudf/stable/ :
> Neither Dask cuDF nor Dask DataFrame provide support for multi-GPU or multi-node execution on their own. You must also deploy a dask.distributed cluster to leverage multiple GPUs. We strongly recommend using Dask-CUDA to simplify the setup of the cluster, taking advantage of all features of the GPU and networking hardware.
cudf.pandas > FAQ > "When should I use cudf.pandas vs using the cuDF library directly?" https://docs.rapids.ai/api/cudf/stable/cudf_pandas/faq/#when... :
> cuDF implements a subset of the pandas API, while cudf.pandas will fall back automatically to pandas as needed.
> Can I use cudf.pandas with Dask or PySpark?
> [Not at this time, though you can change the dask df to e.g. cudf, which does not implement the full pandas dataframe API]
--
dask.distributed docs > Asynchronous Operation; re Tornado or asyncio: https://distributed.dask.org/en/latest/asynchronous.html#asy...
--
tqdm.dask, tqdm.notebook: https://github.com/tqdm/tqdm#ipythonjupyter-integration
  import time
  from tqdm.notebook import trange, tqdm

  for n in trange(10):
      time.sleep(1)
--
But then TPUs, instead of or in addition to async GPUs;
TensorFlow TPU docs: https://www.tensorflow.org/guide/tpu
Optimization by Decoded Quantum Interferometry
ScholarlyArticle: "Optimization by Decoded Quantum Interferometry" (2025) https://arxiv.org/abs/2408.08292
NewsArticle: "Quantum Speedup Found for Huge Class of Hard Problems" (2025) https://www.quantamagazine.org/quantum-speedup-found-for-hug...
High-fidelity entanglement between telecom photon and room-temp quantum memory
ScholarlyArticle: "High-fidelity entanglement between a telecom photon and a room-temperature quantum memory" (2025) https://arxiv.org/html/2503.11564v1
NewsArticle: "Scientists Achieve Telecom-Compatible Quantum Entanglement with Room-Temperature Memory" (2025) https://thequantuminsider.com/2025/03/19/scientists-achieve-...
Sound that can bend itself through space, reaching only your ear in a crowd
Which quantum operators can be found in [ultrasonic acoustic] wave convergences?
Surround sound systems must be calibrated in order to place the sweet spots of least cacophony.
There also exist ultrasonic scalpels that enable noninvasive subcutaneous surgical procedures that function by wave guiding to cause convergence.
"Functional ultrasound through the skull" (2025) https://news.ycombinator.com/item?id=42086408
"Neurosurgeon pioneers Alzheimer's, addiction treatments using ultrasound [video]" (2024) https://news.ycombinator.com/item?id=39556615
Did the Particle Go Through the Two Slits, or Did the Wave Function?
According to modern QFT, there are no particles except as an approximation. There are no fields except as mathematical formalisms. There's no locality. There is instead some kind of interaction of graph nodes, representing quantum interactions, via "entanglement" and "decoherence".
In this model, there are no "split particle" paradoxes, because there are no entities that resemble the behavior of macroscopic bodies, with our intuitions about them.
Imagine a Fortran program, with some neat index-based FOR loops, and some per-element computations on a bunch of big arrays. When you look at its compiled form, you notice that the neat loops are now something weird, produced by automatic vectorization. If you try to find out how it runs, you notice that the CPU not only has several cores that run parts of the loop in parallel, but the very instructions in one core run out of order, while still preserving the data dependency invariants.
"But did the computation of X(I) run before or after the computation of X(I+1)?!", you ask in desperation. You cannot tell. It depends. The result is correct though, your program has no bugs and computes what it should. It's counter-intuitive, but the underlying hardware reality is counter-intuitive. It's not illogical or paradoxical though.
This is incorrect. There are particles. They are excitations in the field.
There still is the 'split particle paradox' because QFT does not solve the measurement problem.
The 'some kind of interaction of graph nodes' by which I am guessing you are referring to Feynman diagrams are not of a fundamental nature. They are an approximation known as 'perturbation theory'.
I think what they must be referring to is the fact that particles are only rigorously defined in the free theory. When coupling is introduced, how the free theory relates to the coupled theory depends on heuristic/formal assumptions.
We're leaving my area of understanding, but I believe Haag's theorem shows that the naïve approach, where the interacting and free theories share a Hilbert space, completely fails -- even stronger than that, _no_ Hilbert space could even support an interacting QFT (in the ways required by scattering theory). This is a pretty strong argument against the existence of particles except as asymptotic approximations.
Since we don't have consensus on a well-defined, non-perturbative gauge theory, mathematically speaking it's difficult to make any firm statements about what states "exist" in absolute. (I'm certain that people working on the various flavours of non-perturbative (but still heuristic) QFT -- like lattice QFT -- would have more insights about the internal structure of non-asymptotic interactions.)
Though it doesn't resolve whether a "quantum" is a particle or a measurable convergence of waves, electrons and photons are observed with high-speed imaging.
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054
There exist single photon emitters and single photon detectors.
Qualify that there are single photons if there are single photon emitters:
Single-photon source: https://en.wikipedia.org/wiki/Single-photon_source
QFT is not yet reconciled with (n-body) [quantum] gravity, which it predicts with 100% error; no better than random chance.
IIRC, QFT cannot explain why superfluid helium walks up the sides of a container against gravity, given the mass of each particle/wave of the superfluid and of the beaker and the earth, sun, and moon; though we say that gravity at any given point is the net sum of directional vectors acting upon said given point, or actually gravitational waves with phase and amplitude.
You said "gauge theory",
"Topological gauge theory of vortices in type-III superconductors" https://news.ycombinator.com/item?id=41803662
From https://news.ycombinator.com/context?id=43081303 .. https://news.ycombinator.com/item?id=43310933 :
> Probably not gauge symmetry there, then.
Generate impressive-looking terminal output, look busy when stakeholders walk by
The amount of time I spent getting asciifx and agg to work with syntax highlighting, because IPython now has only Python Prompt Toolkit instead of readline.
In order to leave a Python coding demo tutorial GIF/MP4 on their monitor(s) at conferences or science fairs.
stemkiosk arithmetic in Python GIF v0.1.2: https://github.com/stemkiosk/stemkiosk/blob/e8f54704c6de32fb...
Ask HN: Project Management Class Recommendations?
I need to improve my skills in defining project goals, steps, and timelines and aligning teams around them.
Can anyone recommend some of their favorite online courses in this area? Particularly in technical environments.
Justified methods?
"How did software get so reliable without proof? (1996) [pdf]" https://news.ycombinator.com/item?id=42425617
CPM: Critical Path Method, resource scheduling, agile complexity estimation, planning poker, WBS Work Breakdown Structure: https://news.ycombinator.com/item?id=33582264#33583666
Project network: https://en.wikipedia.org/wiki/Project_network
Project planning: https://en.wikipedia.org/wiki/Project_planning
PERT: Program evaluation and review technique: https://en.m.wikipedia.org/wiki/Program_evaluation_and_revie...
Project management > Approaches of project management: https://en.wikipedia.org/wiki/Project_management
PMI: Project Management Institute > Certifications: https://en.wikipedia.org/wiki/Project_Management_Institute#C...
PMBOK: Project Management Body of Knowledge > Contents: https://en.wikipedia.org/wiki/Project_Management_Body_of_Kno...
Agile software development > Methods: TDD, https://en.wikipedia.org/wiki/Agile_software_development
/? site:github.com inurl:awesome project management ; Books, Courses https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
Class Central > Project Management Courses and Certifications: https://www.classcentral.com/subject/project-management
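As an illustrative aside, the critical-path idea behind CPM/PERT reduces to a longest-path computation over a task dependency DAG. A minimal Python sketch, with a made-up task graph (the task names and durations are hypothetical):

```python
from functools import lru_cache

# Hypothetical task graph: name -> (duration, prerequisite names)
tasks = {
    "design": (3, ()),
    "build":  (5, ("design",)),
    "test":   (2, ("build",)),
    "docs":   (4, ("design",)),
    "ship":   (1, ("test", "docs")),
}

@lru_cache(maxsize=None)
def earliest_finish(task: str) -> int:
    # Earliest finish = own duration + latest finish among prerequisites.
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

# The project length is the longest (critical) path through the DAG.
project_length = max(earliest_finish(t) for t in tasks)
```

Here "ship" finishes at design (3) + build (5) + test (2) + ship (1) = 11, so the critical path runs through build and test rather than docs; shortening docs would not shorten the project.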
Which processes scale without reengineering, once DevSecOps is automated with GitOps?
Brooks's law (observation) > Exceptions and possible solutions: https://en.wikipedia.org/wiki/Brooks%27s_law
Business process re-engineering > See also: https://en.wikipedia.org/wiki/Business_process_re-engineerin... :
Learning agenda: https://en.wikipedia.org/wiki/Learning_agenda , CLO
Agendas; Robert's Rules of Order has a procedure for motioning to add something to a meeting agenda.
Robert's Rules of Order: https://en.wikipedia.org/wiki/Robert%27s_Rules_of_Order
3 Questions; a status report to have ready for team chat, whether async or time-bound: Since [the last meeting], Before [the next meeting], Obstacles
Stand-up meeting > Three questions: https://en.wikipedia.org/wiki/Stand-up_meeting#Three_questio...
Recursion kills: The story behind CVE-2024-8176 in libexpat
> Please leave recursion to math and keep it out of (in particular C) software: it kills and will kill again.
This is just nonsense. The issue is doing an unbounded amount of resource consuming work. Don't do an unbounded amount of resource consuming work, regardless of whether that work is expressed in a recursive or iterative form.
Any recursive function can be transformed into a tail recursive form, exchanging stack allocation for heap allocation. And any tail recursive program can be transformed into a loop (a trampoline). It's really not the language construct that is the issue here.
> Any recursive function can be transformed into a tail recursive form, exchanging stack allocation for heap allocation.
Can't all recursive functions be transformed to stack-based algorithms? And then, generally, isn't there a flatter resource consumption curve for stack-based algorithms, unless the language supports tail recursion?
E.g. Python has sys.getrecursionlimit() == 1000 by default, RecursionError, collections.deque; and "Performance of the Python 3.14 tail-call interpreter": https://news.ycombinator.com/item?id=43317592
> Can't all recursive functions be transformed to stack-based algorithms?
Yes, you can explicitly manage the stack. You still consume memory at the same rate, but now it is a heap-allocated stack that is programmatically controlled, instead of a stack-allocated stack that is automatically controlled.
The issue is either:
* whatever the parser is doing is already tail recursive, but not expressed in C in a way that doesn't consume stack. In this case it's trivial, but perhaps tedious, to convert it to a form that doesn't use unbounded resources.
* whatever the parser is doing uses the intermediate results of the recursion, and hence is not trivially tail recursive. In this case any reformulation of the algorithm using different language constructs will continue to use unbounded resources, just heap instead of stack.
It's not clear how the issue was fixed. The only description in the blog post is "It used the same mechanism of delayed interpretation" which suggests they didn't translate the existing algorithm to a different form, but changed the algorithm itself to a different algorithm. (It reads like they went from eager to lazy evaluation.)
Either way, it's not recursion itself that is the problem.
> You still consume memory at the same rate, but now it is a heap-allocated stack that is programmatically controlled, instead of a stack-allocated stack that is automatically controlled
To be precise:
In C (and IIUC now Python, too), a stack frame is created for each function call (unless tail call optimization is applied by the compiler).
To avoid creating an unnecessary stack frame for each recursive call, instead create a collections.deque, enqueue traversed nodes at its beginning or end, and process them in a loop within one non-recursive function.
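A minimal sketch of that transformation, using a toy tree of nested tuples (the tree structure and names are illustrative):

```python
from collections import deque

# Toy binary tree as nested tuples: (value, left, right), with None for leaves.
tree = ("a", ("b", None, None), ("c", ("d", None, None), None))

def traverse_recursive(node):
    if node is None:
        return []
    value, left, right = node
    # Each call adds a stack frame; deep trees raise RecursionError in Python.
    return [value] + traverse_recursive(left) + traverse_recursive(right)

def traverse_iterative(node):
    # Same pre-order traversal with an explicit heap-allocated stack (a deque),
    # so depth is bounded by memory rather than sys.getrecursionlimit().
    out, stack = [], deque([node])
    while stack:
        n = stack.pop()
        if n is None:
            continue
        value, left, right = n
        out.append(value)
        stack.append(right)  # push right first so left is processed first
        stack.append(left)
    return out
```

Both functions visit the nodes in the same order; only where the pending work lives (call stack vs. deque) differs.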
Is tail call optimization faster than a collections.deque in a loop within a single function's stack frame?
Yes, that is basically it. A tail call should be the same jump instruction as a loop, but performance really depends on the language implementation and is hard to make general statements about.
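For illustration, a minimal trampoline in Python, assuming the tail-recursive function is rewritten to return thunks (a sketch of the general technique, not how any particular runtime implements it):

```python
def trampoline(fn, *args):
    # Drive the computation in a loop: keep calling thunks until a
    # non-callable result appears, so the call stack never deepens.
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def factorial(n, acc=1):
    if n <= 1:
        return acc
    # Return a zero-argument thunk instead of recursing directly.
    return lambda: factorial(n - 1, acc * n)

# Runs well past Python's default recursion limit of ~1000 frames.
big = trampoline(factorial, 5000)
```

The trade-off is constant-factor overhead per bounce (allocating a closure and a `callable` check) in exchange for O(1) stack depth.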
The Lost Art of Logarithms
Notes from "How should logarithms be taught?" (2021) https://news.ycombinator.com/item?id=28519356 re: logarithms in the Python standard library, NumPy, SymPy, TensorFlow, PyTorch, Wikipedia
Jupyter JEP: AI Representation for tools that interact with notebooks
(TIL about) MCP: Model Context Protocol; https://modelcontextprotocol.io/introduction
MCP Specification: https://spec.modelcontextprotocol.io/specification/draft/
/? Model Context Protocol: https://hn.algolia.com/?q=Model+Context+Protocol
From https://github.com/jupyter/enhancement-proposals/pull/129#is... :
> Would a JSON-LD 'ified nbformat and `_repr_jsonld_()` solve for this too?
There's a (closed) issue to: "Add JSONLD @context to the top level .ipynb node nbformat#44"
Datoviz: High-Performance GPU Scientific Visualization Library with Vulkan
"Datoviz: high-performance GPU scientific data visualization C/C++/Python library" https://github.com/datoviz/datoviz
> In the long term, Datoviz will mostly be used as a VisPy 2.0 backend.
ctypes bindings for Python
Matplotlib and MATLAB colormaps
0.4: WebGPU, Jupyter
... jupyter-xeus and JupyterLite; https://github.com/jupyter-xeus/xeus
From https://news.ycombinator.com/item?id=43201706 :
> jupyter-xeus supports environment.yml with jupyterlite with packages from emscripten-forge [a Quetz repo built with rattler-build]
> emscripten-forge src: https://github.com/emscripten-forge/recipes/tree/main/recipe... web: https://repo.mamba.pm/emscripten-forge
High-performance computing, with much less code
> The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.
exo-lang/exo: https://github.com/exo-lang/exo
Proposed Patches Would Allow Using Linux Kernel's Libperf from Python
> Those interested in the prospects of leveraging the libperf API from Python code can see this RFC patch series for all the details: https://lore.kernel.org/lkml/20250313075126.547881-1-gautam@... :
>> In this RFC series, we are introducing a C extension module to allow python programs to call the libperf API functions. Currently libperf can be used by C programs, but expanding the support to python is beneficial for python users.
Beta: Connect your Colab notebooks directly to Kaggle's Jupyter Servers
"Kaggle's notebook environment is now based on Colab's Docker image" https://news.ycombinator.com/item?id=42480582
kaggle/docker-python: https://github.com/Kaggle/docker-python
Google Colaboratory docs > Local runtimes: https://research.google.com/colaboratory/local-runtimes.html :
> https://us-docker.pkg.dev/colab-images/public/runtime
docker run --gpus=all -p 127.0.0.1:9000:8080 us-docker.pkg.dev/colab-images/public/runtime

Is Rust a good fit for business apps?
Rust is hard to get started with, but once you reach optimal development speed, I can't see how you can go back to any other language.
I have a production application that runs on Rust. It has never crashed (yet), it's rock solid and does not require frequent restarts of the container due to memory leaks or whatever, and every time I need to fix something, I can jump into the code after weeks of not touching it, confident that my changes won't break anything, as long as the code compiles (and is free of logical bugs).
I can't say the same about any other language, and I use a few of them in production: NodeJS, Ruby, Python, Java. Each of them has their own quirks, and I'm never 100% confident that changes in one place of the code won't cause harm in another place, or that the code is free of stupid bugs like null-pointer exceptions.
100% agreed. After writing Rust as my day job and using it in production for the last 2 years, my only criticism is the lack of a good high level standard library (like Go) and the insanity and fragmentation of Future type signatures.
Though at this point, I wish I could write everything in Rust - it's fantastic
Also, I can't wait to (practically) use it as a replacement for JavaScript on the web
So you can't write everything in rust? You mean like business apps? Just asking.
Last I checked, WASM support isn't quite there, yet, basically. I haven't checked in a little while, though.
re: noarch, emscripten-32, and/or emscripten-wasm32 WASM packages of rust on emscripten-forge a couple days ago. [1][2]
emscripten-forge is a package repo of conda packages for `linux-64 emscripten-32 emscripten-wasm32 osx-arm64 noarch` built with rattler-build and hosted with quetz: https://repo.mamba.pm/emscripten-forge
Evcxr is a Rust kernel for Jupyter; but jupyter-xeus is the new (C++) way to write JupyterLite kernels like xeus-sqlite, xeus-lua, and xeus-javascript
[1]: evcxr_jupyter > "jupyter lite and conda-forge feedstock(s)" https://github.com/evcxr/evcxr/issues/399
[2]: emscripten-forge > "recipes_emscripten/rust and evxcr_jupyter kernel" https://github.com/emscripten-forge/recipes/issues/1983
container2wasm c2w might already compile rust to WASI WASM? https://github.com/container2wasm/container2wasm
Trump's Big Bet: Americans Will Tolerate Downturn to Restore Manufacturing
Is there a reason to think suddenly a bunch of manufacturing pops up and pushes prices down?
I’m not convinced there’s any real strategy here.
The SDGs have Goals, Targets, and Indicators.
What are the goals for domestic manufacturing?
FRED series tagged "Manufacturing" https://fred.stlouisfed.org/tags/series?t=manufacturing
"Manufacturers' New Orders: Total Manufacturing (AMTMNO)" https://fred.stlouisfed.org/series/AMTMNO
The DuckDB Local UI
The UI aesthetics look similar to the excellent Rill, also powered by DuckDB: https://www.rilldata.com/
Rill has better built in visualizations and pivot tables and overall a polished product with open-source code in Go/Svelte. But the DuckDB UI has very nice Jupyter notebook-style "cells" for editing SQL queries.
Rill founder here, I have no comment on the UI similarity :) but I would emphasize our vision is building DuckDB-powered metrics layers and exploratory dashboards -- which we presented at DuckCon #6 last month, PDF below [1] -- and less on notebook style UIs like Hex and Jupyter.
Rill is fully open-source under the Apache license. [2]
[1] https://blobs.duckdb.org/events/duckcon6/mike-driscoll-rill-...
WhatTheDuck does SQL with duckdb-wasm
Pygwalker does open-source descriptive statistics and charts from pandas dataframes: https://github.com/Kanaries/pygwalker
ydata-profiling does open-source Exploratory Data Analysis (EDA) with Pandas and Spark DataFrames and integrates with various apps: https://github.com/ydataai/ydata-profiling #integrations, #use-cases
xeus-sqlite is a xeus kernel for jupyter and jupyterlite which has Vega visualizations for sql queries: https://github.com/jupyter-xeus/xeus-sqlite
jupyterlite-xeus installs packages specified in an environment.yml from emscripten-forge: https://jupyterlite-xeus.readthedocs.io/en/latest/environmen...
emscripten-forge has xeus-sqlite and pandas and numpy and so on; but not yet duckdb-wasm: https://repo.mamba.pm/emscripten-forge
duckdb-wasm "Feature Request: emscripten-forge package" https://github.com/duckdb/duckdb-wasm/discussions/1978
Scientists discover an RNA that repairs DNA damage
ScholarlyArticle: "NEAT1 promotes genome stability via m6A methylation-dependent regulation of CHD4" (2025) https://genesdev.cshlp.org/content/38/17-20/915
"Supercomputer draws molecular blueprint for repairing damaged DNA" (2025) https://news.ycombinator.com/item?id=43349021
Supercomputer draws molecular blueprint for repairing damaged DNA
"Molecular architecture and functional dynamics of the pre-incision complex in nucleotide excision repair" (2025) https://www.nature.com/articles/s41467-024-52860-y
Also, "Scientists discover an RNA that repairs DNA damage" (2025) https://news.ycombinator.com/item?id=43313781
"NEAT1 promotes genome stability via m6A methylation-dependent regulation of CHD4" (2025) https://genesdev.cshlp.org/content/38/17-20/915
Tunable superconductivity and Hall effect in a transition metal dichalcogenide
ScholarlyArticle: "Tunable superconductivity coexisting with the anomalous Hall effect in a transition metal dichalcogenide" (2025) https://www.nature.com/articles/s41467-025-56919-2
D-Wave First to Demonstrate Quantum Supremacy on Useful, Real-World Problem
ScholarlyArticle: "Beyond-classical computation in quantum simulation" (2025) https://www.science.org/doi/10.1126/science.ado6285
Move over graphene Scientists forge bismuthene and host of atoms-thick metals
From https://news.ycombinator.com/item?id=43337868 :
> The new bismuth-based transistor could revolutionize chip design, offering higher efficiency while bypassing silicon’s limitations [...]
Can bismuthene and similar be nanoimprinted?
Peer-to-peer file transfers in the browser
I keep a long list of browser based and CLI p2p file transfer tools[1].
LimeWire (which now has its own cryptocurrency, probably) has been on a rampage recently, acquiring some really good tools including ShareDrop and SnapDrop. https://pairdrop.net/ is the last one standing so far.
[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
/? inurl:awesome p2p site:github.com: https://google.com/search?q=inurl:awesome+p2p+site:github.co...
Peer-to-peer: https://en.wikipedia.org/wiki/Peer-to-peer
How to build your own replica of TARS from Interstellar
mujoco > "Renderer API" https://github.com/google-deepmind/mujoco/issues/2487 re: renderers in addition to Unity
Maybe a 3d robot on screen?
Low-power 2D gate-all-around logics via epitaxial monolithic 3D integration
ScholarlyArticle: "Low-power 2D gate-all-around logics via epitaxial monolithic 3D integration" (2025) https://www.nature.com/articles/s41563-025-02117-w
"China’s new silicon-free chip beats Intel with 40% more speed and 10% less energy" (2025) https://interestingengineering.com/innovation/chinas-chip-ru... :
> The new bismuth-based transistor could revolutionize chip design, offering higher efficiency while bypassing silicon’s limitations [...]
> Their newly developed 2D transistor is said to be 40% faster than the latest 3-nanometre silicon chips from Intel and TSMC while consuming 10% less energy. This innovation, they say, could allow China to bypass the challenges of silicon-based chipmaking entirely.
> “It is the fastest, most efficient transistor ever,” according to an official statement published last week on the PKU website.
Does Visual Studio rot the mind? (2005)
Absolutely it does. In the same way Socrates warned us about books, VS externalizes our memory and ability, and makes us reliant on a tool to accomplish something we have the ability to do without. This reliance goes even further, making us dependent on it as our natural ability withers from neglect.
I cannot put it more plainly: it incentivizes us to let a part of us atrophy. It would be like giving up the ability to run a mile because our reliance on cars weakened our legs and de-conditioned us to the point of making it physically impossible.
Cloudflare: New source of randomness just dropped
Another source of random entropy better than a wall of lava lamps:
>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
But is that randomness good enough for rngd to continually re-seed an RNG with?
(Is our description of such measurable fluctuations in the quantum foam inadequate to predict what we're calling random?)
> google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid... .. docs: https://github.com/google/paranoid_crypto/blob/main/docs/ran...
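For a taste of what such randomness test suites check, here is a minimal monobit (frequency) test in the style of NIST SP 800-22; a standalone sketch, not paranoid_crypto's actual implementation:

```python
import math

def monobit_pvalue(bits: str) -> float:
    # Frequency (monobit) test: under the null hypothesis of fair bits,
    # the signed sum of +/-1 values is approximately N(0, n), so the
    # p-value follows from the complementary error function.
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# A heavily biased stream fails decisively; a balanced one passes.
p_biased = monobit_pvalue("1" * 800 + "0" * 200)
p_balanced = monobit_pvalue("01" * 500)
```

Real suites (NIST SP 800-22, paranoid_crypto) run many such tests; a single monobit pass says very little on its own.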
Stem cell therapy trial reverses "irreversible" damage to cornea
From https://news.ycombinator.com/item?id=37204123 (2023) :
> From "Sight for sore eyes" (2009) https://newsroom.unsw.edu.au/news/health/sight-sore-eyes :
>> "A contact lens-based technique for expansion and transplantation of autologous epithelial progenitors for ocular surface reconstruction" (2009) http://dx.doi.org/10.1097/TP.0b013e3181a4bbf2
>> In this study, they found that only one brand (Bausch and Lomb IIRC) of contact lens worked well as a scaffold for the SC
Good to see stem cell research in the US.
Stem cell laws and policy in the United States: https://en.m.wikipedia.org/wiki/Stem_cell_laws_and_policy_in...
If you witness a cardiac arrest, here's what to do
From https://news.ycombinator.com/item?id=39850383#39863280 :
> Basic life support (BLS) https://en.wikipedia.org/wiki/Basic_life_support :
>> DRSABCD: Danger, Response, Send for help, Airway, Breathing, CPR, Defibrillation
> "Drs. ABCD"
From "Defibrillation devices save lives using 1k times less electricity" (2024) https://news.ycombinator.com/item?id=42061556 :
> "New defib placement increases chance of surviving heart attack by 264%" (2024) https://newatlas.com/medical/defibrillator-pads-anterior-pos... :
>> Placing defibrillator pads on the chest and back, rather than the usual method of putting two on the chest, increases the odds of surviving an out-of-hospital cardiac arrest by more than two-and-a-half times, according to a new study.
From SBA.gov blog > "Review Your Workplace Safety Policies" (2019) https://www.sba.gov/blog/review-your-workplace-safety-polici... :
> Also, consider offering training for CPR to employees. Be sure to have an automatic external defibrillator (AED) on site and have employees trained on how to use it. The American Red Cross and various other organizations offer free or low-cost training.
Ask HN: Optical Tweezers for Neurovascular Resection?
Applications, Scale, Feasibility?
Optical tweezers: https://en.wikipedia.org/wiki/Optical_tweezers
NewsArticle: "Engineers create a chip-based tractor beam for biological particles" (2024) https://phys.org/news/2024-10-chip-based-tractor-biological-...
ScholarlyArticle: "Optical tweezing of microparticles and cells using silicon-photonics-based optical phased arrays" (2024) https://www.nature.com/articles/s41467-024-52273-x
Europe bets once again on RISC-V for supercomputing
China recently moved in that direction. That would be a nice collaboration to see between the EU and China.
China to publish policy to boost RISC-V chip use nationwide, sources say https://www.reuters.com/technology/china-publish-policy-boos...
If you ignore the military ambitions of China and the fact they’re openly sharing technology with Russia, perhaps.
I don’t see anything but regret for Europe several decades from now if they decide to start providing China with the technical expertise they’re currently lacking in this space.
This is all about China trying to find a way to escape the pressure of sanctions from Europe and the US.
Didn't ARM start in Europe?
And RISC-V started at UC Berkeley in 2010.
RISC-V: https://en.wikipedia.org/wiki/RISC-V
"Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490 with nanoimprinting (which 10x's current gen nanolithography FWIU)
"Nanoimprint Lithography Aims to Take on EUV" (2025) https://news.ycombinator.com/item?id=42575111 :
> Called nanoimprint lithography (NIL), it’s capable of patterning circuit features as small as 14 nanometers—enabling logic chips on par with Intel, AMD, and Nvidia processors now in mass production.
Online Embedded Rust Simulator
TX for this! My older kid and I are learning Rust with Rustlings, but this will help me add a physical element to it (even though it's simulated), eventually we may get an esp32. Younger kid loves doing Microsoft microbit, he will definitely be interested in this.
Wokwi also supports Pi Pico w/ Python: https://news.ycombinator.com/item?id=38034530 , https://news.ycombinator.com/item?id=36970206
This kit connects a BBC Microbit v2 to a USB-chargeable Li-Ion battery on a mountable expansion board with connectors for Motors for LEGOs ® and a sonar:bit ultrasonic sensor: "ELECFREAKS micro:bit 32 IN 1 Wonder Building Kit, Programmable K12 Educational Learning Kit with [MOC blocks / Technics®] Building Blocks/Sensors/Wukong Expansion Board" https://shop.elecfreaks.com/products/elecfreaks-micro-bit-32...
There are docs on GitHub for the kit: https://github.com/elecfreaks/learn-en/tree/master/microbitK... .. web: https://wiki.elecfreaks.com/en/microbit/building-blocks/wond...
"Raspberry Pico Rust support" https://github.com/wokwi/wokwi-features/issues/469#issuecomm... :
> For future people who come across this issue: you can still simulate Rust on Raspberry Pi Pico with Wokwi, but you'll have to compile the firmware yourself. Then you can load it into Wokwi for VS Code or Wokwi for Intellij.
Scientists Confirm the Existence of 'Second Sound'
ScholarlyArticle: "Thermography of the superfluid transition in a strongly interacting Fermi gas" (2025) https://www.science.org/doi/10.1126/science.adg3430
New visualization but not a new phenomenon. I observed it in the 510 lab when I was getting my physics PhD at Cornell.
Armchair physicist with dissonance about superfluids (or Bose-Einstein condensates) which break all the existing models.
And my armchair physicist notes; /? superfluid https://westurner.github.io/hnlog/ #search=superfluid
I think I've probably already directly mentioned Fedi's?
Finally found this; https://news.ycombinator.com/item?id=42957014 :
> Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
- [ ] Models fluidic attractor systems
- [ ] Models superfluids
- [ ] Models n-body gravity in fluidic systems
- [ ] Models retrocausality
Re: gauge theory, superfluids: https://news.ycombinator.com/item?id=43081303
He said there's a newer version of this:
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
People Are Paying $5M and $1M to Dine with Donald Trump
Sleaziest thing I've ever heard!
Isn't that illegal influence peddling?
For the record, when Buffett auctions a meeting for pay it's for charity; and he's not on the clock as a public servant.
For context,
Change.org (2007) https://en.wikipedia.org/wiki/Change.org
We The People (2009-01) https://en.wikipedia.org/wiki/We_the_People_(petitioning_sys... :
> The right "to petition the Government for a redress of grievances" is guaranteed by the United States Constitution's First Amendment. [...]
> Overview > Thresholds: Under the Obama administration's rules, a petition had to reach 150 signatures (Dunbar's Number) within 30 days to be searchable on WhiteHouse.gov, according to Tom Cochran, former director of digital technology. [8] It had to reach 100,000 signatures within 30 days to receive an official response. [9] The original threshold was set at 5,000 signatures on September 1, 2011,[10] was raised to 25,000 on October 3, 2011, [11] and raised again to 100,000 as of January 15, 2013. [12] The White House typically would not comment when a petition concerned an ongoing investigation. [13]
> Sleaziest
Sorry, but that's ad hominem (name calling), not a valid argument.
Is it civilly or criminally illegal for the sitting president to do "sell the plate" fundraisers for PACs that aren't kickbacks?
Where is the current record of such receipts?
Kickback (bribery): https://en.wikipedia.org/wiki/Kickback_(bribery)
Influence peddling: https://en.wikipedia.org/wiki/Influence_peddling
US Constitution > Article II > Section 4: https://constitution.congress.gov/constitution/article-2/#ar...
> The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
Impeachment in the United States: https://en.wikipedia.org/wiki/Impeachment_in_the_United_Stat... :
> The Constitution limits grounds of impeachment to "Treason, Bribery, or other high Crimes and Misdemeanors", [2] but does not itself define "high crimes and misdemeanors".
Presumably, Bribery needn't be defined in the Constitution because such laws and rules are the business of the Legislative and the Judicial branches, and such rules apply to all people.
Buffett is not a public servant in any way, on or off the clock. He is entirely a private citizen.
I agree. As a fund or holding company manager, Mr. Buffett has a fiduciary obligation to shareholders.
The president has a fiduciary obligation to the country.
https://harvardlawreview.org/print/vol-132/faithful-executio...
In terms of fiduciary and Constitutional - or Constitutional (including fiduciary) - obligations, is there any penalty for violating an Oath of Office?
US Constitution > Article II.S1.C8.1 "Oath of Office for the Presidency": https://constitution.congress.gov/browse/essay/artII-S1-C8-1... :
> Before he enter on the Execution of his Office, he shall take the following Oath or Affirmation:– I do solemnly swear (or affirm) that I will faithfully execute the Office of President of the United States, and will to the best of my Ability, preserve, protect and defend the Constitution of the United States.
US Constitution > Article II.S1.C8.1.5 "Violation of the Presidential Oath" https://constitution.congress.gov/browse/essay/artII-S1-C8-1...
Oath of office of the president of the United States: https://en.wikipedia.org/wiki/Oath_of_office_of_the_presiden...
Impeachment, removal from office, being barred from ever holding office again. Those are separate but need to happen in order.
That’s the limit of it, by design I think. The lack of criminality or even civil offense means the job is predicated on trust to achieve political goals. Once trust is lost, the individual must be removed from office or immense damage will ensue.
I've never heard of double jeopardy for impeachment; being impeached does not preclude criminal prosecution for the same offense.
The FBI's assessment of presidential immunity should perhaps be reviewed in light of the court's recent ruling on the same, and the fact that the FBI is an executive branch department of government. They work for the executive - as evidenced by the firing of James Comey - so we can't trust their assessment of his immunity.
AI tools are spotting errors in research papers: inside a growing movement
AI tools are introducing errors in research papers.
Wonder which is more common overall? Can AI spot more errors than it creates, or is it in equilibrium and a net zero?
Other than these quips, I am actually a fan of this movement. Anything that helps in the scientific process of peer review, as long as it does not actively annoy and delay the authors is a welcome addition to the process. Papers will be written, many of them incorrectly, with statistical or procedural errors. To have a checker that can find these quickly, ideally pre-publishing, is a great thing.
Also pro error finding with LLMs.
But concerned about tool dependency; if you can't do it without the AI, you can't support the code.
"LLMs cannot find reasoning errors, but can correct them" (2023) https://news.ycombinator.com/item?id=38353285
"New GitHub Copilot research finds 'downward pressure on code quality'" (2024) https://news.ycombinator.com/item?id=39168105
"AI generated code compounds technical debt" (2025) https://news.ycombinator.com/item?id=43185735 :
> “I don't think I have ever seen so much technical debt being created in such a short period of time"
Even if trained on only formally verified code, we should not expect LLMs to produce code that passes formal verification.
"Ask HN: Are there any objective measurements for AI model coding performance?" (2025) https://news.ycombinator.com/item?id=43206779
https://news.ycombinator.com/item?id=43061977#43069287 re: memory vulns, SAST DAST, Formal Verification, awesome-safety-critical
Covid-19 speeds up artery plaque growth, raising heart disease risk
From https://news.ycombinator.com/item?id=40681226 :
> "Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming" (2016) https://www.science.org/doi/10.1126/scitranslmed.aad6100
> "Powdered Booze Could Fix Your Clogged Arteries" (2016) https://www.popsci.com/compound-in-powdered-alcohol-can-also...
FWIU, beta-cyclodextrin is already FDA approved, and injection of betacyclodextrin reversed arterio/atherosclerosis; possibly because our arteries are caked with sugar alcohol and beta-cyclodextrin absorbs alcohol.
Asteroid fragments upend theory of how life on Earth bloomed
> Not only does Bennu contain all 5 of the nucleobases that form DNA and RNA on Earth and 14 of the 20 amino acids found in known proteins, the asteroid’s amino acids hold a surprise.
> On Earth, amino acids in living organisms predominantly have a ‘left-handed’ chemical structure. Bennu, however, contains nearly equal amounts of these structures and their ‘right-handed’, mirror-image forms, calling into question scientists’ hypothesis that asteroids similar to this one might have seeded life on Earth.
Hm, why would chirality need to be a consequence of the panspermia hypothesis? I thought the mission defined "necessary ingredients for life" as any bio markers that might have seeded a primordial soup on Earth.
I don’t know a ton about how chirality works. Couldn’t it just be that half (or some number) of the asteroids contain left-handed molecules and half contain right-handed ones? We only have a sample of one. Or is there something fundamental about left-handed molecules that gives us reason to believe that if we see right-handed ones once, we would rarely see left-handed ones in similar disconnected systems?
Chirality means that there is a mirror image of a molecule that cannot be twisted into the original shape, despite being structurally identical. Due to the particular ways molecules tickle each other in living organisms to do interesting things, that means that the mirror image (enantiomer) of a molecule does something different.
In chemical synthesis, most (but not all) processes tend to preserve the chirality of molecules: replacing a bunch of atoms in a molecule with another set will tend not to cause the molecule to flip to its mirror image. If you start from an achiral molecule (one whose mirror image can be rotated to the original), almost all processes tend to end up with a 50-50 mix of the two enantiomers of the product (a racemate).
In biochemistry, you can derive all of the amino acids and sugars from a single chiral molecule: glyceraldehyde. It turns out that nearly all amino acids end up in the form derived from L-glyceraldehyde and nearly all sugars come from D-glyceraldehyde. The question of why this is the case is the question of homochirality.
There's as yet no full answer to the question of homochirality. We do know that a slight excess of one enantiomer tends to amplify into a situation where only that enantiomer occurs. But we don't know if the breakdown into L-amino acids and D-sugars (as opposed to D-amino acids and L-sugars) happened by pure chance or if there is some specific reason that L-amino acids/D-sugars is preferred.
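The amplification mechanism mentioned above has a standard toy model: Frank's 1953 autocatalysis-with-mutual-antagonism scheme, in which each enantiomer catalyzes its own production and the two destroy each other on contact. A minimal Euler-integration sketch (rate constants, step size, and initial excess are illustrative, not from any cited paper):

```python
# Frank (1953) toy model: L' = k_a*L - k_x*L*R and R' = k_a*R - k_x*L*R.
# The cross term removes both enantiomers equally, so the difference obeys
# d' = k_a * d: any initial excess grows exponentially toward homochirality.
def frank_model(l, r, k_auto=1.0, k_antag=1.0, dt=0.001, steps=5000):
    for _ in range(steps):
        dl = k_auto * l - k_antag * l * r
        dr = k_auto * r - k_antag * l * r
        l, r = l + dl * dt, r + dr * dt
    return l, r

# A 1% initial excess of L over R amplifies substantially by t = 5:
l_end, r_end = frank_model(1.01, 1.00)
```

The absolute concentrations here are arbitrary; the point is only that the excess grows while the minority enantiomer is suppressed, which is why "pure chance plus amplification" is a live hypothesis.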
"I Applied Wavelet Transforms to AI and Found Hidden Structure" https://news.ycombinator.com/item?id=42956262 re: CODES, chirality, and chiral molecules whose chirality results in locomotion
Do any of these affect the fields that would have selected for molecules on Earth: the Sun's rotation, Earth's rotation, the direction of revolution in our by now almost coplanar solar system, or galactic rotation?
Launch HN: Enhanced Radar (YC W25) – A safety net for air traffic control
Hey HN, we’re Eric and Kristian of Enhanced Radar. We’re working on making air travel safer by augmenting control services in our stressed airspace system.
Recent weeks have put aviation safety on everyone’s mind, but we’ve been thinking about this problem for years. Both of us are pilots — we have 2,500 hours of flight time between us. Eric flew professionally and holds a Gulfstream 280 type rating and both FAA and EASA certificates. Kristian flies recreationally, and before this worked on edge computer vision for satellites.
We know from our flying experience that air traffic management is imperfect (every pilot can tell stories of that one time…), so this felt like an obvious problem to work on.
Most accidents are the result of an overdetermined “accident chain” (https://code7700.com/accident_investigation.htm). The popular analogy here is the Swiss cheese model, where holes in every slice line up perfectly to cause an accident. Often, at least one link in that chain is human error.
We’ll avoid dissecting this year’s tragedies and take a close call from last April at DCA as an example:
The tower cleared JetBlue 1554 to depart on Runway 04, but simultaneously a ground controller on a different frequency cleared a Southwest jet to cross that same runway, putting them on a collision course. Controllers noticed the conflict unfolding and jumped in to yell at both aircraft to stop, avoiding a collision with about 8 seconds to spare (https://www.youtube.com/watch?v=yooJmu30DxY).
Importantly, the error that caused this incident occurred approximately 23 seconds before the conflict became obvious. In this scenario, a good solution would be a system that understands when an aircraft has been cleared to depart from a runway, and then makes sure no aircraft are cleared to cross (or are in fact crossing) that runway until the departing aircraft is wheels-up. And so on.
To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio. It’s trained on a large amount of our own labeled ATC audio collected from our VHF receivers located at airports around the US. We improve performance by injecting context such as airport layout details, nearby/relevant navaids, and information on all relevant aircraft captured via ADS-B.
Our product piggy-backs on the raw signal in the air (VHF radio from towers to pilots) by having our own antennas, radios, and software installed at the airport. This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.
Building models for processing ATC speech is our first step toward building a safety net that detects human error (by both pilots and ATC). The latest system transcribes the VHF control audio at about ~1.1% WER (Word Error Rate), down from a previous record of ~9%. We’re using these transcripts with NLP and ADS-B (the system that tracks aircraft positions in real time) for readback detection (ensuring pilots correctly repeat ATC instructions) and command compliance.
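For reference, the WER quoted above is the word-level edit distance (substitutions + insertions + deletions) divided by the reference length. A minimal sketch (the ATC phrasing in the example is invented):

```python
# Word Error Rate: word-level Levenshtein distance / reference word count.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # classic dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

At 1.1% WER, roughly one word in ninety is wrong, which is why readback detection is layered on top rather than trusting transcripts alone.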
There are different views about the future of ATC. Our product is naturally based on our own convictions and experience in the field. For example, it’s sometimes said that voice comms are going away — we think they aren’t (https://www.ericbutton.co/p/speech). People also point out that airplanes are going to fly themselves — in fact they already do. But passenger airlines, for example, will keep a pilot onboard (or on the ground) with ultimate control for a long time to come; the economics and politics and mind-boggling safety and legal standards of aviation make this inevitable. Also, while next-gen ATC systems like ASDE-X are already in place, they don’t eliminate the problem. The April 2024 scenario mentioned above occurred at DCA, an ASDE-X-equipped airport.
America has more than 5,000 public-use airports, but only 540 of these have control towers (due to cost). As a result, there are over 100 commercial airline routes that fly into uncontrolled airports, and 4.4M landings at these fields. Air traffic control from first principles looks significantly more automated, more remote-controlled, and much cheaper — and as a result, much more widespread.
We’ve known each other for 3 years, and decided independently that we needed to work on air traffic. Having started on this, we feel like it’s our mission for the next decade or two.
If you’re a pilot or an engineer who’s thought about this stuff, we’d love to get your input. We look forward to hearing everyone’s thoughts, questions, ideas!
Can any aircraft navigation system plot drone Remote ID beacons on a map?
How sensitive of a sensor array is necessary to trilaterate Remote ID signals and birds for aircraft collision avoidance?
A Multispectral sensor array (standard) would probably be most robust.
From https://news.ycombinator.com/item?id=40276191 :
> Are there autopilot systems that do any sort of drone, bird, or other aerial object avoidance?
A lot of drones these days will have ADS-B. The ones that don't probably have geo-fencing to keep them away from airports. There's also all kinds of drone detection systems based on RF emittance.
The bird problem is a whole other issue. Mostly handled by PIREP today if birds are hanging out around an approach/departure path.
Computer vision here is definitely going to be useful long term.
FWIU geo-fencing was recently removed from one brand of drones.
Thermal: motors, chips, heatsinks, and batteries are warm but the air is colder around propellers; RF: motor RF, circuit RF, battery RF, control channel RF, video channel RF, RF from federally required Remote ID or ADS-B beacons, gravitational waves
Aircraft have less time to recover from e.g. engine and windshield failure at takeoff and landing; so all drones at airports must be authorized by ATC (Air Traffic Control): it is criminally illegal to fly a drone at an airport without authorization because it endangers others.
Tagging a bird on the 'dashcam'+radar+sensors feed could create a PIREP:
PIREP: Pilot's Report: https://en.wikipedia.org/wiki/Pilot_report
Looks like "birds" could be coded as /SK sky cover, /WX weather and visibility, or /RM remarks with the existing system described on Wikipedia.
Prometheus (originally developed at SoundCloud) does pull-style metrics: each monitored server hosts over HTTP(S) a document in the Prometheus text exposition format that the centralized monitoring service pulls from on its own scrape schedule. This avoids swamping (or DoS'ing) the centralized monitoring service, which in a push-style monitoring system must scale to the number of incoming reports.
All metrics for the service are included in the one (1) Prometheus document, which prevents requests for monitoring data from exhausting the resources of the monitored server. It is up to the implementation to determine whether to fill with nulls if sensor data is unavailable, or, for example, to fill forward with the previous value if sensor data is unavailable for one metric.
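As a sketch of the pull model described above (metric names and values are hypothetical; the on-the-wire shape follows the Prometheus text exposition format):

```python
# Minimal pull-style /metrics endpoint: the monitored server just renders
# one text document; Prometheus scrapes it on its scrape_interval.
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(metrics):
    """Render {name: (help_text, type, value)} in the text exposition format."""
    lines = []
    for name, (help_text, mtype, value) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

METRICS = {
    "http_requests_total": ("Total HTTP requests served.", "counter", 1027),
    "process_open_fds": ("Open file descriptors.", "gauge", 7),
}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics(METRICS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

Because the document is rendered on demand from in-memory counters, a scrape costs one request regardless of how many metrics the service exports.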
Solutions for birds around runways and in flight paths and around wind turbines:
- Lights
- Sounds: human audible, ultrasonic
- Thuds: birds take flight when the ground shakes
- Eyes: Paint large eyes on signs by the runways
> Sounds and Thuds [that scare birds away]
In "Glass Antenna Turns windows into 5G Base Stations" https://news.ycombinator.com/item?id=41592848 or a post linked thereunder, I mentioned ancient stone lingams on stone pedestals which apparently scare birds away from temples when they're turned.
/? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
Are some ancient stone lingams also piezoelectric voice transducer transmitters, given water over copper or gold between the lingam and pedestal and given the original shape of the stones? Also, stories of crystals mounted on pyramids and towers.
Could rotating large stones against stone scare birds away from runways?
Remote ID: https://en.wikipedia.org/wiki/Remote_ID
Airborne collision avoidance system: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...
"Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665458
Are there already "bird / not a bird" datasets?
Procedures for creating "bird on Multispectral plane radar and video" dataset(s):
Tag birds on the dashcam video with timecoded sensor data and a segmentation and annotation tool.
Pinch to zoom, auto-edge detect, classification probability, sensor status
voxel51/fiftyone does segmentation and annotation with video and possibly Multispectral data: https://github.com/voxel51/fiftyone
Oh, weather radar would help pilots too:
From https://news.ycombinator.com/item?id=43260690 :
> "The National Weather Service operates 160 weather radars across the U.S. and its territories. Radar detects the size and motion of particles in rain, snow, hail and dust, which helps meteorologists track where precipitation is falling. Radar can even indicate the presence of a tornado [...]"
From today, just now; fish my wish!
"Integrated sensing and communication based on space-time-coding metasurfaces" (2025-03) https://news.ycombinator.com/item?id=43261825
NASA uses GPS on the moon for the first time
> the Lunar GNSS Receiver Experiment (LuGRE) [is] one of the 10 projects packed aboard Blue Ghost. [...]
> However, LuGRE’s achievements didn’t only begin after touchdown on the moon. On January 21, the instrument broke NASA’s record for highest altitude GNSS signal acquisition at 209,900 miles from Earth while traveling to the moon. That record continued to rise during Blue Ghost’s journey over the ensuing days, peaking at 243,000 miles from Earth after reaching lunar orbit on February 20.
New Benchmark in Quantum Computational Advantage with 105-Qubit Processor
ScholarlyArticle: "Establishing a New Benchmark in Quantum Computational Advantage with 105-qubit Zuchongzhi 3.0 Processor" (2025) https://arxiv.org/abs/2412.11924
NewsArticle: "Superconducting quantum processor prototype operates 10^15 times faster than fastest supercomputer" https://phys.org/news/2025-03-superconducting-quantum-proces... :
> The quantum processor achieves a coherence time of 72 μs, a parallel single-qubit gate fidelity of 99.90%, a parallel two-qubit gate fidelity of 99.62%, and a parallel readout fidelity of 99.13%. The extended coherence time provides the necessary duration for performing more complex operations and computations. [...]
> To evaluate its capabilities, the team conducted an 83-qubit, 32-layer random circuit sampling task on the system. Compared to the current optimal classical algorithm, the computational speed surpasses that of the world's most powerful supercomputer by 15 orders of magnitude. Additionally, it outperforms the latest results published by Google in October of last year by 6 orders of magnitude, establishing the strongest quantum computational advantage in the superconducting system to date.
Integrated sensing and communication based on space-time-coding metasurfaces
ScholarlyArticle: "Integrated sensing and communication based on space-time-coding metasurfaces" (2025) https://www.nature.com/articles/s41467-025-57137-6
NewsArticle: "Space-time-coding metasurface could transform wireless networks with dual-functionality for 6G era" https://techxplore.com/news/2025-03-space-coding-metasurface...
How the U.K. broke its own economy
This video explains who paid for Brexit and Trump: "Canadian company tied to Brexit and Trump backers" (2019) https://youtube.com/watch?v=alNjJVpO8L4&
Cambridge Analytica > United Kingdom: https://en.wikipedia.org/wiki/Cambridge_Analytica#Channel_4_...
"New Evidence Emerges of Steve Bannon and Cambridge Analytica’s Role in Brexit" (2018) https://www.newyorker.com/news/news-desk/new-evidence-emerge...
"‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower" (2018) https://www.theguardian.com/news/2018/mar/17/data-war-whistl...
Facebook–Cambridge Analytica data scandal: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana... ($4b FTC fine)
Furthermore,
"3.5M Voters Were Purged During 2024 Presidential Election [video]" (2025) https://news.ycombinator.com/item?id=43047349
They only "won" by like 1.5m votes.
Who's illegitimate? https://www.google.com/search?q=Trump+birthrarism
Live Updates: China and Canada Retaliate Against New Trump Tariffs
I'm learning that when things go to shit smart money looks for opportunity.
Anyone want to start up a company selling pre-fab shipping-container houses with me?
More broadly, the kind of people who make money under tariff schemes already have investment managers.
I haven’t heard a Keynesian say these tariffs will achieve their stated purpose.
The US tariffs hurt US consumers and Canadian companies. The counter-tariffs do the reverse. However, the US tariffs seem to be all-product while the Canadian ones seem tailored to "encourage" selection of a non-US alternative (and are geo-targeted, as it were). Is the US ready to experience some pain? Those outside the US have been prepping for this sh*t for a while.
The counter-tariffs are usually explicitly chosen to avoid hurting their own citizens while hurting red states, because alternative sources of the tariffed goods exist domestically or in other parts of the world. Americans need aluminum and steel and lumber and power. Canada does not need Kentucky whiskey and Harleys.
Kinda, yeah, and I get that everyone will have their pet good that they feel shouldn't be tariffed, but including fruits/vegetables/legumes very much hurts the consumer.
China seemed to focus on agricultural goods which it has been pushing for local industries for decades anyways. I think this was an olive branch in that it didn't really try to turn the screws?
Because of live updates, this snapshot is necessarily out of date: https://archive.ph/qK2mx
IKEA registered a Matter-over-Thread temperature sensor with the FCC
Matter (standard) https://en.wikipedia.org/wiki/Matter_(standard)
Connectivity Standards Alliance > Certified products: https://csa-iot.org/csa-iot_products/?p_keywords=Thermometer...
Nanoscale spin rectifiers for harvesting ambient radiofrequency energy
"Nanoscale spin rectifiers for harvesting ambient radiofrequency energy" (2024) https://www.nature.com/articles/s41928-024-01212-1
From "NUS researchers develop new battery-free technology to power electronic devices using ambient radiofrequency signals" (2024) https://news.nus.edu.sg/nus-researchers-develop-new-battery-... :
> To address these challenges, a team of NUS researchers, working in collaboration with scientists from Tohoku University (TU) in Japan and University of Messina (UNIME) in Italy, has developed a compact and sensitive rectifier technology that uses nanoscale spin-rectifier (SR) to convert ambient wireless radio frequency signals at power less than -20 dBm to a DC voltage.
> The team optimised SR devices and designed two configurations: 1) a single SR-based rectenna operational between -62 dBm and -20 dBm, and 2) an array of 10 SRs in series achieving 7.8% efficiency and zero-bias sensitivity of approximately 34,500 mV/mW. Integrating the SR-array into an energy harvesting module, they successfully powered a commercial temperature sensor at -27 dBm.
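For scale on the quoted figures, dBm is decibels referenced to 1 mW, so mW = 10^(dBm/10):

```python
# dBm to milliwatts: mW = 10 ** (dBm / 10).
# -20 dBm is 10 microwatts; the quoted -62 dBm floor is under a nanowatt.
def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)
```

By this conversion, the -27 dBm level at which the team powered the commercial temperature sensor is roughly 2 µW of ambient RF.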
Passive Wi-Fi or "backscatter redirection Wi-Fi": https://en.wikipedia.org/wiki/Passive_Wi-Fi :
> The system used tens of microwatts of power, [2] 10^−4 the energy of conventional Wi-Fi devices, and one thousandth the energy of the Bluetooth LE and Zigbee communications standards. [1]
Ask HN: Where are the good Markdown to PDF tools (that meet these requirements)?
I'm trying to convert a very large Markdown file (a couple hundred pages) to PDF.
It contains lots of code in code blocks and has a table of contents at the start with internal links to later pages.
I've tried lots of different Markdown-to-PDF converters like md2pdf and Pandoc, even trying converting it through LaTeX first; however, none of them produce working internal PDF links, provide effective syntax highlighting for HTML, CSS, JavaScript, and Python, and wrap code to fit it on the page.
I have a very long regular expression (email validation of course) that doesn't fit on one line but no solutions I have found properly break the lines on page overflow.
What tools does everyone recommend?
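One combination worth trying with Pandoc (untested against this particular document; the filename header.tex is hypothetical): `--toc` for a linked table of contents, `--listings` for syntax highlighting, and a listings setup that wraps long code lines instead of letting them overflow:

```latex
% header.tex, passed as: pandoc doc.md -o doc.pdf --toc --listings -H header.tex
% breaklines wraps code at the page margin; a long single-line regex like the
% email-validation one mentioned above would be broken across lines.
\lstset{
  basicstyle=\ttfamily\small,
  breaklines=true,
  breakatwhitespace=false,
  columns=fullflexible,
}
```

Pandoc's LaTeX writer assigns every heading an auto-generated identifier, so internal `[link](#section-id)` targets should come out as working PDF links when they match those identifiers.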
Understanding Smallpond and 3FS
smallpond: https://github.com/deepseek-ai/smallpond :
> A lightweight data processing framework built on DuckDB and 3FS.
[deleted]
SEC Declares Memecoins Are Not Subject to Oversight
By SEC, or in law in the US because DOGE?
Can QBT Qualified Blind Trusts own memecoins, or is that still a conflict of interest? https://news.ycombinator.com/item?id=43201808
trump sets the direction of SEC enforcement and he's made it clear that he thinks he (and everyone else) should be able to rob normal people via cryptocurrency.
Rob Trump or a crony and I bet the rules of the game change fast.
These people do not have principles.
Collectible dolls aren't SEC regulated as investments. But they're still subject to laws protecting property rights.
Collectibles are also subject to financial reporting requirements that apply according to the USD value at time of purchase and sale.
As I said before regarding "Elephant in the room: Quantum computers will destroy Bitcoin" https://news.ycombinator.com/item?id=43188345#43188777 :
> The market does not appear to cost infosec value, risk, or technical debt into cryptoasset prices.
> PQ or non-PQ does not predict asset price in 2025-02.
But what about DOGE?
Jeff Foxworthy, a comedian, once said:
> Sophisticated people invest their money in stock portfolios.
> Rednecks invest their money in commemorative plates.
Are NFTs or memecoins more similar to (NASCAR) commemorative plates?
That is the perfect analogy both for what NFTs are and who buys them. I’m stealing that.
So do we blame Biden or Trump for delisting XRP, or is SEC respectably independent?
What about CFTC and FTC and the CAT Consolidated Audit Trail; are collectibles over 10K exempt from KYC and AML there too?
(By comparison, banks put days-long holds on large checks.)
There's no such thing as an "independent" executive-branch agency. Congress can't by law create new branches of government.
Is that really true: isn't the Federal Reserve (intentionally designed to be) very independent from both Congress and the White House?
The Federal Reserve is a bit odd since it's a mix of private corporations & public governance. The Federal Reserve Board of Governors (BOG) is part of the government, the Federal Reserve Banks are private corporations whose officer's salaries are approved by the BOG but whose officers are elected by the Member Banks (banks in the US are required to be members). The Federal Open Market Committee (FOMC) is a mix of the BOG & some of the presidents of the various Federal Reserve Banks.
So the BOG isn't independent of the executive branch (it legally can't be), but the FOMC is partly independent since it's a mix of executive branch employees & private bank employees.
That's generally correct, but the issue is more about what kinds of powers are being exercised. Congress can create government-owned corporations, like Amtrak. Those corporations can function independently, insofar as they are not exercising executive powers.
The core function of the Fed isn't an executive power. It's not enforcing the law, or interacting with foreign countries. It's a bank that lends to other banks, and influences the market through that economic function. That's not an executive power and doesn't need to be subject to executive control.
The Fed also performs some executive functions (promulgating and enforcing various regulations). I'd argue those must be under executive control. But that doesn't address the Fed's core interest-rate-setting function.
The recent tariff spat by the tantrum thrower is supposedly justified because of the drug overdose rate.
All lives matter.
List of causes of death by rate: https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rat... :
> Substance abuse: 0.58 %
What about the other 99.4 % of the causes of death, in terms of federal priorities?
What about NIH and NSF funding for medical sciences research?
/? tariff authorization: https://tinyurl.com/certainplansforusall
"Are Drinking Straws Dangerous? (2017)" https://news.ycombinator.com/item?id=43041625
/? plastic straws executive order: https://www.google.com/search?q=plastic+straws+executive+ord...
The recently proposed budget would increase the deficit by 16 trillion dollars on 40 trillion?
Is an Amendment to the Constitution necessary to limit a right according to disability or to create an independent agency which is independent from the Executive (Justice), Legislative, and Judicial branches?
Define "Independent Agency"?
GSA General Services Administration: https://en.wikipedia.org/wiki/General_Services_Administratio... :
> The General Services Administration (GSA) is an independent agency of the United States government established in 1949 to help manage and support the basic functioning of federal agencies.
Independent agencies of the United States federal government: https://en.wikipedia.org/wiki/Independent_agencies_of_the_Un... :
> In the United States federal government, independent agencies are agencies that exist outside the federal executive departments (those headed by a Cabinet secretary) and the Executive Office of the President. [1] In a narrower sense, the term refers only to those independent agencies that, while considered part of the executive branch, have regulatory or rulemaking authority and are insulated from presidential control, usually because the president's power to dismiss the agency head or a member is limited.
> Established through separate statutes passed by Congress, each respective statutory grant of authority defines the goals the agency must work towards, as well as what substantive areas, if any, over which it may have the power of rulemaking. These agency rules (or regulations), when in force, have the power of federal law. [2]
However, like Acts of Congress and Executive Orders, such rules are not Constitutional Amendments.
> Examples of independent agencies: These agencies are not represented in the cabinet and are not part of the Executive Office of the president:
> [ Amtrak, CIA, FCC, FDIC, FEC, Federal Reserve, FERC, FTC, CFTC, SSA, TVA, NASA, NARA, OPM, ]
> Define "Independent Agency"?
I said independent executive-branch agency. The very first sentence of Article II says: "The executive Power shall be vested in a President of the United States of America." Congress can't create an agency that exercises "the executive Power" that's independent of the President. In the same way that Congress can't create an unelected mini-Congress that enacts laws binding on citizens, and can't create courts outside the judiciary branch that can convict people for federal crimes.
> [ Amtrak, CIA, FCC, FDIC, FEC, Federal Reserve, FERC, FTC, CFTC, SSA, TVA, NASA, NARA, OPM, ]
These entities all differ in whether they're exercising "executive power" or not. Amtrak doesn't meaningfully exercise executive power. Congress can provide for Amtrak to be independent of the President's control. Or to use another example, Congress could probably create a bank that provides student loans that's independent of Presidential control.
But the SEC is a quintessential executive-branch agency. It enacts rules that interpret the securities laws and can prosecute people for violations of securities laws.
> In a narrower sense, the term refers only to those independent agencies that, while considered part of the executive branch, have regulatory or rulemaking authority and are insulated from presidential control, usually because the president's power to dismiss the agency head or a member is limited.
The Senate must confirm candidate appointees to independent agencies; otherwise they are not independent of the Executive.
Can the President terminate Congressionally-approved nominations without regard for their service? They can.
Is the President totally immune? They are not.
When can't the executive pardon themselves?
PEP 486 – Make the Python Launcher aware of virtual environments (2015)
It seems to me the py launcher could have done the same things as uv, such as downloading and setting up Python and managing virtual environments.
It may be that there were already so many distros of Python by the time that venv (and virtualenv and virtualenvwrapper) were written.
"PEP 3147 – PYC Repository Directories" https://peps.python.org/pep-3147/ :
> Linux distributions such as Ubuntu [4] and Debian [5] provide more than one Python version at the same time to their users. For example, Ubuntu 9.10 Karmic Koala users can install Python 2.5, 2.6, and 3.1, with Python 2.6 being the default. [...]
> Because these distributions cannot share pyc files, elaborate mechanisms have been developed to put the resulting pyc files in non-shared locations while the source code is still shared. Examples include the symlink-based Debian regimes python-support [8] and python-central [9]. These approaches make for much more complicated, fragile, inscrutable, and fragmented policies for delivering Python applications to a wide range of users. Arguably more users get Python from their operating system vendor than from upstream tarballs. Thus, solving this pyc sharing problem for CPython is a high priority for such vendors.
> This PEP proposes a solution to this problem.
> Proposal: Python’s import machinery is extended to write and search for byte code cache files in a single directory inside every Python package directory. This directory will be called __pycache__.
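The per-interpreter cache paths that PEP 3147 specifies can be computed with the stdlib:

```python
# importlib.util.cache_from_source maps a source path to its PEP 3147
# __pycache__ location; the interpreter tag (e.g. cpython-312) varies
# per interpreter, which is what lets multiple Pythons share the source.
import importlib.util

cache_path = importlib.util.cache_from_source("pkg/mod.py")
# e.g. "pkg/__pycache__/mod.cpython-312.pyc" (tag depends on the interpreter)
```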
Should the package management tool also install multiple versions of the interpreter? conda, mamba, pixi, and uv do. Neither tox nor nox nor pytest cares where the Python install came from.
And then of course cibuildwheel builds binary wheels for Win/Mac/Lin and manylinux wheels for glibc and/or musl libc. repairwheel, auditwheel, delocate, and delvewheel bundle shared-library dependencies (.so and DLL) into the wheel, which is a .zip file with a .whl extension and a declarative manifest that doesn't require running the package's Python code at install time.
https://news.ycombinator.com/item?id=42347468
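Since a wheel is just a zip with declarative metadata, a minimal one (hypothetical package name "demo") can be assembled in memory with the stdlib, no build backend required:

```python
# A wheel named demo-1.0-py3-none-any.whl is a zip containing the package
# plus a dist-info directory with declarative METADATA and WHEEL files.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as whl:
    whl.writestr("demo/__init__.py", "version = '1.0'\n")
    whl.writestr("demo-1.0.dist-info/METADATA",
                 "Metadata-Version: 2.1\nName: demo\nVersion: 1.0\n")
    whl.writestr("demo-1.0.dist-info/WHEEL",
                 "Wheel-Version: 1.0\nRoot-Is-Purelib: true\nTag: py3-none-any\n")

names = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
```

(A RECORD file with per-file hashes would also be needed for a spec-complete wheel; it's omitted here for brevity.)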
repairwheel: https://github.com/jvolkman/repairwheel :
> It includes pure-python replacements for external tools like patchelf, otool, install_name_tool, and codesign, so no non-python dependencies are required.
pip used to support virtualenvs.
pip 0.2 (2008) https://pypi.org/project/pip/0.2/ :
> pip is complementary with virtualenv, and it is encouraged that you use virtualenv to isolate your installation.
https://pypi.org/project/pip/0.2/#using-pip-with-virtualenv :
pip install -E venvpath/ pkg1 pkg2
When was the -E <virtualenv> flag removed from pip and why?
    pip install --help | grep "\-E"
"PEP 453 – Explicit bootstrapping of pip in Python installations" https://peps.python.org/pep-0453/#changes-to-virtual-environ... :
> Python 3.3 included a standard library approach to virtual Python environments through the venv module. Since its release it has become clear that very few users have been willing to use this feature directly, in part due to the lack of an installer present by default inside of the virtual environment. They have instead opted to continue using the virtualenv package which does include pip installed by default.
"why venv install old pip?" re: `python -m ensurepip && python -m pip install -U pip` https://github.com/python/cpython/issues/74813
> When was the -E <virtualenv> flag removed from pip and why?
Though `pip install --python=... pkg` won't work ( https://github.com/pypa/pip/pull/12068 ),
Now, there's:
    pip --python=$VIRTUAL_ENV/bin/python install pkg
GSA Eliminates 18F
Amazing how the idea of being able to file tax returns online, without paying for a commercial service to do it, is considered far-left extremism in the US.
You’d think that simplifying tax returns and reducing costs in the taxation system would be something the right could get behind?
no, the right wants you to keep paying Turbotax.
"Musk ally is moving to close office behind free tax filing program at IRS" (2025) https://news.ycombinator.com/item?id=43222216
18F is not the group behind Direct File, and it's very annoying that this narrative keeps getting spread. DF was created by a team of people from USDS, 18F and the IRS. It's housed within the IRS.
Inheriting is becoming nearly as important as working
People really should look at the work of Gary Stevenson here: https://youtu.be/TflnQb9E6lw
The truth is economic growth hasn’t been occurring in real terms for most people for a long time and the rich have been transferring money from the poor to themselves at a dramatic rate.
I’m starting to think the entire system is corrupt and we are headed for a destroyed Europe and a civil war in the US. Maybe I’m very pessimistic but this moment in history feels like the end of the American empire, what comes after this is extremely uncertain but people only seem to demand a fair piece of the wealth after a world war.
From https://news.ycombinator.com/item?id=43140675 :
> Gini Index: https://en.wikipedia.org/wiki/Gini_coefficient
Find 1980 on this chart of wealth inequality in the US:
> GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
Are there additional measures of wealth inequality?
Income distribution: https://en.wikipedia.org/wiki/Income_distribution
World Inequality Database: https://en.wikipedia.org/wiki/World_Inequality_Database
USA!: https://wid.world/country/usa/
Economic inequality: https://en.wikipedia.org/wiki/Economic_inequality :
> Economic inequality is an umbrella term for a) income inequality or distribution of income (how the total sum of money paid to people is distributed among them), b) wealth inequality or distribution of wealth (how the total sum of wealth owned by people is distributed among the owners), and c) consumption inequality (how the total sum of money spent by people is distributed among the spenders).
Musk ally is moving to close office behind free tax filing program at IRS
/? turbotax https://hn.algolia.com/?q=turbotax
"TurboTax’s 20-Year Fight to Stop Americans from Filing Taxes for Free" (2019) https://news.ycombinator.com/item?id=21281411
Hash Denial-of-Service Attack in Multiple QUIC Implementations
No!
> Out of these functions, only SipHash provides significant security guarantees in this context; crafting collisions for the first two can be done fairly efficiently.
You can trivially craft collisions against SipHash as well. "Significant" is not significant, and using SipHash would also make the table much slower, comparable to a linked-list search.
The only fix for DoS attacks against hash tables is to fix the collision resolution method, e.g. from linear to logarithmic, or to detect such attacks (i.e., collision counting). Using a slower, better hash function is FUD, because every hash function will produce collisions in hash tables, even SHA-256.
https://github.com/rurban/smhasher/?tab=readme-ov-file#secur...
I know I've looked it up before, but for whatever reason I can't explain how open-addressing hashmaps work: how does a read know to check the next bucket after a collision on insert?
I know it's a dumb question, and that I've researched it before. Strange.
(I recall Python first reporting the hash-randomization DoS vulnerability that resulted in the PYTHONHASHSEED env var, and dict later becoming insertion-ordered by default.)
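The usual answer, as a sketch: a read doesn't "know" to check the next bucket; insert and lookup both walk the same deterministic probe sequence, so a lookup re-traces the insert's steps until it finds the key or hits an empty slot. A minimal illustration with linear probing (CPython's dict actually uses a perturb-based probe sequence and resizes; this toy table does neither):

```python
# Toy open-addressing hash map with linear probing. Illustration only:
# no resizing, no deletion/tombstones.
EMPTY = object()

class OpenAddressMap:
    def __init__(self, size=8):
        self.slots = [EMPTY] * size
        self.size = size

    def _probe(self, key):
        # Insert and lookup walk the SAME probe sequence, so a lookup
        # re-traces the steps an insert took after a collision.
        i = hash(key) % self.size
        while True:
            yield i
            i = (i + 1) % self.size  # linear probing: try the next bucket

    def put(self, key, value):
        for i in self._probe(key):
            if self.slots[i] is EMPTY or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return

    def get(self, key):
        for i in self._probe(key):
            if self.slots[i] is EMPTY:
                raise KeyError(key)  # an empty slot terminates the probe chain
            if self.slots[i][0] == key:
                return self.slots[i][1]

m = OpenAddressMap(size=8)
m.put(0, "x")
m.put(8, "y")  # hash(8) % 8 == 0 too: collides, lands in the next bucket
assert m.get(0) == "x" and m.get(8) == "y"  # lookup replays the same probes
```

This is also why an attacker who can force many keys into one probe chain degrades lookups toward O(n), which is the DoS discussed above.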
/? The only fix for DOS attacks against hashtables is to fix the collision resolution method. https://www.google.com/search?q=The+only+fix+for+DOS+attacks...
"Defending Hash Tables from Subterfuge with Depth Charge" (2024) https://dl.acm.org/doi/fullHtml/10.1145/3631461.3631550
"Defending hash tables from algorithmic complexity attacks with resource burning" (2024) https://www.sciencedirect.com/science/article/abs/pii/S03043...
Hash collision: https://en.wikipedia.org/wiki/Hash_collision
Electric Propulsion Magnets Ready for Space Tests
How long will it be before this new space propulsion capability can be scaled in order to put the ISS on the moon instead of in the ocean?
Long after it already is. The ISS is aging, and there was every intention of retiring it even before its principal sponsors started a proxy war.
There's certainly zero chance that it could be on the moon. It wasn't designed to survive on a surface. It would not be able to support itself.
If we want something in orbit around the moon, it will still be far cheaper to build a new thing designed for that purpose.
It does seem a shame that it can't be recovered and donated to museums or preserved in some way. I assume that it would be prohibitively expensive/complicated to try to do so, but it's a huge part of the history of space research, and it's a bit of a bummer to just throw it away.
I mean, if Starship works well, it could theoretically retrieve the ISS in a piecemeal fashion.
Orbital refuelling of non-satellite spacecraft beyond, IDK, a few km has not been demonstrated.
Rendezvousing with and pushing or pulling a ~900K lb (~410K kg) object in orbit has never been demonstrated.
In-orbit outer-hull repair on a vessel with occupants has also never been demonstrated?
Are there NEO avoidance plans that do not involve fracturing the object into orbital debris on approach?
Violence alters human genes for generations, researchers discover
> The idea that trauma and violence can have repercussions into future generations should help people be more empathetic, help policymakers pay more attention to the problem of violence
This seems like a pretty charitable read on policymakers. We inflict violence all the time that has multigenerational downstream effects without a genetic component and we don’t really care about the human cost, why would adding a genetic component change anything?
Because later generations shouldn't be forced to pay the cost of violence that they didn't perpetrate or perpetuate.
To make it real when there's no compassion, loving kindness, or the golden rule:
War reparations: https://en.wikipedia.org/wiki/War_reparations
I think you may want to re-read the parent. They are saying that the reasons you gave (all of which existed before) barely restrained humankind from doing what it was/is/will be doing.
If compassion and war reparations are insufficient to deter unjust violence, what will change the reinforced behaviors?
You have to reach the children and rehabilitate them. This is the type of damage the children are growing up with (an HBO documentary from 2004 which I highly recommend people watch; the journalist was fatally shot while filming it):
https://www.youtube.com/watch?v=Isa5TRnidnk#t=30m30s
Counting the dead on your fingers. This has to be a rehabilitation effort, because that's no way for children to talk and be.
Family therapy > Summary of theories and techniques: https://en.wikipedia.org/wiki/Family_therapy#Summary_of_theo...
Expressive therapies: https://en.wikipedia.org/wiki/Expressive_therapies
Systemic therapy: https://en.wikipedia.org/wiki/Systemic_therapy :
> Based largely on the work of anthropologists Gregory Bateson and Margaret Mead, this resulted in a shift towards what is known as "second-order cybernetics" which acknowledges the influence of the subjective observer in any study, essentially applying the principles of cybernetics to cybernetics – examining the examination.
"What are the treatment goals?"
Clean Language: https://en.wikipedia.org/wiki/Clean_language
...
In "Jack Ryan" (TV Series) Season 1 Episode 1, the children suffer from war trauma in their upbringing and that pervades their lives: https://en.wikipedia.org/wiki/Jack_Ryan_(TV_series)#Season_1...
In "Life is Beautiful" (1997) Italian children are trapped in war: https://en.wikipedia.org/wiki/Life_Is_Beautiful
Magneto from X-Men (with the helmet) > Fictional character biography > Early life: https://en.wikipedia.org/wiki/Magneto_(Marvel_Comics)#Early_...
"Chronicles of Narnia" (1939-1949) > Background and conception: https://en.wikipedia.org/wiki/The_Chronicles_of_Narnia#Backg...
War reparations gave us the Nazis, so they clearly don’t work. And compassion has given us everything we have seen thus far in history so we can conclude that too is ineffective.
Reparations certainly deterred further violent fascist statism in post WWII Germany.
Unfortunately the Berlin Wall.
WWI reparations were initially assessed by the Treaty of Versailles (1919) https://en.wikipedia.org/wiki/World_War_I_reparations
Dulles, Dawes Plan > Results: https://en.wikipedia.org/wiki/Dawes_Plan
> Dawes won the 1925 Nobel Peace Prize; WWI reparations obligations were reduced
Then the US was lending them money and steel because it was so bad there, and then we learned they had been building tanks and bombs with our money instead of railroads and peaceful jobs.
Business collaboration with Nazi Germany > British, Swiss, US, Argentinian and Canadian banks: https://en.wikipedia.org/wiki/Business_collaboration_with_Na...
And then the free money rug was pulled out from under them, and then the ethnic group wouldn't sell their paintings to help pay the debts of the war and subsequent central economic mismanagement.
And then they invaded various continents, overextended themselves when they weren't successfully managing their own country's economy, and the Allied powers eventually found the art (and gold) and dropped the bomb developed by various ethnic groups in the desert and that was that.
Except for then WWII reparations: https://en.wikipedia.org/wiki/World_War_II_reparations
The US still occupies or inhabits Germany, which is Russia's neighbor.
Trump was $400 million in debt to Deutsche Bank AG (of Germany, and now Russia) and had to personally guarantee said loan due to prior defaults. Nobody but Deutsche Bank would loan Trump (Trump Vodka, Trump University) money prior to 2016. Also Russian state banks like VEB, which they forgot to mention.
Business projects of Donald Trump in Russia > Timeline of Trump business activities related to Russia: https://en.m.wikipedia.org/wiki/Business_projects_of_Donald_...
It looks like, despite attempted bribes of foreign heads of state with a free apartment, there will not be a Trump Tower Moscow.
"Biden halts Trump-ordered US troops cuts in Germany" (2021) https://apnews.com/article/joe-biden-donald-trump-military-f...
> art (and gold)
"The Monuments Men" (2014) https://en.wikipedia.org/wiki/The_Monuments_Men
You could argue that’s the trauma expressing itself through derivative real experiences into the future.
It is the purpose of criminal and civil procedure to force parties that caused loss, violence, death and trauma to pay for their offenses.
We have a court system that is supposed to abide Due Process so that there are costs to inflicting trauma (without perpetuating a vicious cycle).
[deleted]
Surgery implants tooth material in eye as scaffolding for lens
/? eye transplant https://hn.algolia.com/?q=eye+transplant
https://scholar.google.com/scholar?q=related:ZlcYhwhYqiUJ:sc...
From "Clinical and Scientific Considerations for Whole Eye Transplantation: An Ophthalmologist's Perspective" (2025) https://tvst.arvojournals.org/article.aspx?articleid=2802568 :
> Whereas advances in gene therapy, neurotrophic factor administration, and electric field stimulation have shown promise in preclinical optic nerve crush injury models, researchers have yet to demonstrate efficacy in optic nerve transection models—a model that more closely mimics WET. Moreover, directing long-distance axon growth past the optic chiasm is still challenging and has only been shown by a handful of approaches. [5–8]
> Another consideration is that even if RGC axons could jump across the severed nerve ending, it would be impossible to guarantee maintenance of the retinal-cortical map. For example, if the left eye were shifted clockwise during nerve coaptation, RGCs in the superior-nasal quadrant of donor retinas would end up synapsing with superior-temporal neurons in the host's geniculate nucleus. This limitation also plagues RGC-specific transplantation approaches; its effect on vision restoration is unknown.
"Combined Whole Eye and Face Transplant: Microsurgical Strategy and 1-Year Clinical Course" (2024) https://pubmed.ncbi.nlm.nih.gov/39250113/ :
> Abstract: [...] Serial electroretinography confirmed retinal responses to light in the transplanted eye. Using structural and functional magnetic resonance imaging, the integrity of the transplanted visual pathways and potential occipital cortical response to light stimulation of the transplanted eye was demonstrated. At 1 year post transplant (postoperative day 366), there was no perception of light in the transplanted eye.
"Technical Feasibility of Whole-eye Vascular Composite Allotransplantation: A Systematic Review" (2023) https://journals.lww.com/prsgo/fulltext/2023/04000/Technical... :
> With nervous coaptation, 82.9% of retinas had positive electroretinogram signals after surgery, indicating functional retinal cells after transplantation. Results on optic nerve function were inconclusive. Ocular-motor functionality was rarely addressed.
How to target NGF(s) to the optic nerve?
Magnets? RF convergence?
How to resect allotransplant and allograft optic nerve tissue?
How to stimulate neuronal growth in general?
Near-infrared stimulates neuronal growth and also there's red light therapy.
Nanotransfection stimulates tissue growth by in-vivo stroma reprogramming.
How to understand the optic nerve portion of the connectome?
The Visual and Auditory cortices are observed to be hierarchical.
Near-field imaging of [optic] nerves better than standard VEP Visual Evoked Potential tests would enable optimization of [optic nerve] transection.
VEP: https://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked...
Ophthalmologic science is important because, while it's possible to fight oxidation and aging, our eyes go.
Upper-atmospheric radiation is terrible on eyes. This could be a job for space medicine, and pilots.
Accommodating IOLs that resist UV damage better than natural tissue: Ocumetics
From "Portable low-field MRI scanners could revolutionize medical imaging" (2023) https://news.ycombinator.com/item?id=34990738 :
> Is MRI-level neuroimaging possible with just NIRS Near-Infrared Spectroscopy?
From "Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35886145 :
> So, to run the same [fMRI, NIRS,] stimulus response activation observation/burn-in again weeks or months later with the same subjects is likely necessary given Representational drift.
"Reversible optical data storage below the diffraction limit (2023)" [at cryogenic temperatures] https://news.ycombinator.com/item?id=38528844 :
> [...] have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
Optical tweezers operating below the Abbe diffraction limit are probably of use in resecting neurovascular tissue in the optic nerve (the retina and visual cortex)?
"Real-space nanophotonic field manipulation using non-perturbative light–matter coupling" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-1-1... :
> "One can write, erase, and rewrite an infinite number of times,"*
"Retinoid restores eye-specific brain responses in mice with retinal degeneration" (2022) https://news.ycombinator.com/item?id=33129531
Fluoxetine increases plasticity in the adult visual cortex; https://news.ycombinator.com/item?id=43079501
Zebrafish can regrow eyes.
From the "What if Eye...?" virtual eyes in a petri dish simulation: https://news.ycombinator.com/item?id=43044958 :
> [ mTor in Axolotls, ]
"Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://neurosciencenews.com/vision-restoration-genetic-2318... :
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
/? Regenerative medicine for ophthalmologic applications
Medical treatments devised for war can quickly be implemented in US hospitals
> Our research, and that of others, found that too much oxygen can actually be harmful. Excess oxygen triggers oxidative stress – an overload of unstable molecules called free radicals that can damage healthy cells. That can lead to more inflammation, slower healing and even organ failure.
> In short, while oxygen is essential, more isn’t always better. [...]
> We discovered that severely injured patients often require less oxygen than previously believed. In fact, little or no supplemental oxygen is needed to safely care for 95% of these patients
Oxidative stress: https://en.wikipedia.org/wiki/Oxidative_stress
Antioxidant > Levels in food: https://en.wikipedia.org/wiki/Antioxidant#Levels_in_food
Anthocyanin antioxidants: https://www.google.com/search?q=anthocyanin+antioxidants
Deep-sea divers know about oxygen toxicity:
Trimix (breathing gas) https://en.wikipedia.org/wiki/Trimix_(breathing_gas) :
> With a mixture of three gases it is possible to create mixes suitable for different depths or purposes by adjusting the proportions of each gas. Oxygen content can be optimised for the depth to limit the risk of toxicity, and the inert component balanced between nitrogen (which is cheap but narcotic) and helium (which is not narcotic and reduces work of breathing, but is more expensive and can increase heat loss).
> The mixture of helium and oxygen with a 0% nitrogen content is generally known as heliox. This is frequently used as a breathing gas in deep commercial diving operations, where it is often recycled to save the expensive helium component. Analysis of two-component gases is much simpler than three-component gases.
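The depth/oxygen tradeoff in the quote can be quantified with the standard maximum operating depth (MOD) formula, MOD ≈ 10 m × (pO2_max / fO2 − 1) for seawater. The gas fractions below are illustrative examples, not dive-planning advice:

```python
# Maximum operating depth (MOD) for a given oxygen fraction, using the
# standard seawater approximation: MOD = 10 m * (pO2_max / fO2 - 1).
def mod_metres(f_o2, p_o2_max=1.4):
    """Depth (m) at which O2 partial pressure reaches p_o2_max (bar),
    given the fraction of O2 in the breathing gas."""
    return 10.0 * (p_o2_max / f_o2 - 1.0)

# Air (21% O2) vs. a hypoxic trimix 10/70 (10% O2, 70% He, 20% N2):
air_mod = round(mod_metres(0.21))   # ~57 m
tmx_mod = round(mod_metres(0.10))   # ~130 m
```

Lowering the oxygen fraction is exactly what pushes the usable depth down, which is why deep mixes are hypoxic at the surface.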
HFNC (High-Flow Nasal Cannula) breathing tube therapy is recommended by various medical guidelines.
HFNC and prone positioning is one treatment protocol for COVID and ARDS Acute Respiratory Distress Syndrome: you put them on their stomach and give them a breathing tube (instead of a ventilator on their backs).
Which treatment protocols and guidelines should be updated given these findings?
For which conditions is HFNC therapy advisable given these findings?
Heated humidified high-flow therapy: https://en.wikipedia.org/wiki/Heated_humidified_high-flow_th...
Netboot Windows 11 with iSCSI and iPXE
Hey Terin! Nice post!
I also netboot Windows this way, to run 20 machines in my house off the same base disk image, which we use for LAN parties. I have code and an extensive guide on GitHub:
https://github.com/kentonv/lanparty
It looks like you actually figured out something I failed at, though: installing Windows directly over iSCSI from the start. I instead installed to a local device, and then transferred the disk image to the server. I knew that building a WinPE environment with the right network drivers would probably help here, but I got frustrated trying to use the WinPE tools, which seemed to require learning a lot of obscure CLI commands (ironically, being Windows...).
You observed some slowness using Windows over the network. I did too, when I was doing it with 1G LAN, but I've found on 10G it pretty much feels the same as local.
BTW, a frustrating thing: The Windows 10->11 updater also seemingly fails to include network drivers and so you can't just upgrade over iSCSI. I'm still stuck on Windows 10 so I'm going to have to reinstall everything from scratch sometime this year. Maybe I'll follow your guide to use WinPE this time.
Hey Kenton!
I figured you had done something similar with the LAN Party House. If I hadn't figured it out I was going to ask/look for your setup.
> You observed some slowness using Windows over the network.
Mini-ITX makes it a bit difficult to upgrade to 10GbE (only one PCIe slot!), and the slowness isn't bad enough in-game to deal with upgrading it just yet.
> BTW, a frustrating thing: The Windows 10->11 updater also seemingly fails to include network drivers and so you can't just upgrade over iSCSI.
I've read (and also observed, now) that if you install directly on iSCSI Windows doesn't make the recovery partition. This evidently also breaks 10->11 upgrades.
God bless you for this, sir. I've been wanting to get Windows iSCSI boot working, but there's always one more thing. Did you get anything else fun working with iPXE? All the examples online seem so outdated.
If you get (i)pxe running, you can chain to https://netboot.xyz/ which lets you boot lots of open source stuff.
It's a bit of a mixed bag, because PXE environments have a way of not always being useful. On BIOS boot, there are tools from isolinux to load disk images into memory and hook the BIOS calls... but if your OS of choice doesn't use BIOS calls for storage, it needs a driver that can find the disk image in memory.
For UEFI boot, there's not a good way to do this; supposedly some UEFI environments can load disk images from the network, but AFAIK it's not something you can do from iPXE. Instead, for UEFI, the netboot.xyz folks have some other approaches: typically fetching the kernel and initrd separately, or otherwise repackaging things rather than using official ISO images.
And I've run into lots of cases where, while PXE seems to work, maybe the keyboard doesn't work in PXE, or something else doesn't get properly initialized, and you end up having a better time if you give up and boot from USB.
System Rescue CD and Clonezilla are PXE-bootable.
"OneFileLinux: A 20MB Alpine metadistro that fits into the ESP" https://news.ycombinator.com/item?id=40915199 :
> Ventoy, signed EFIstubs, USI, UKI
TIL about https://netboot.xyz/
Elephant in the room: Quantum computers will destroy Bitcoin
Someone had to say it. Maybe the current drop is normies finally waking up and realizing that extrapolated accelerating developments in quantum computers will break encryption used in Bitcoin within 5 years.
It's also extremely naive to assume it will be easy to transfer a massive decentralized project to a post-quantum algorithm. Maybe new cryptos will be invented, but Bitcoin will not "retain value".
Things that will retain value if the entire internet is broken due to rapid deployment of quantum computers will be:
- Real estate
- Physical assets (gold, silver, etc)
- Physical stock certificates (printed on actual paper)
- Paper money
Since the internet, cards, and finance may just stop functioning one day as quantum computers break all encryption.
Feel free to prove me wrong.
There is a pending hard fork to PQ Post Quantum algorithms for all classical blockchains.
There will likely be different character lengths for account addresses and keys, so all of the DNS+HTTP web services and HTTP web forms built on top will need different form validation.
Vitalik Buterin presented on this subject a few years ago. Doubling key sizes may or may not be sufficient to limit the risk of quantum attacks on the elliptic-curve cryptography employed by Bitcoin and many other DLTs.
The Chromium browser now supports ML-KEM (Kyber), a PQ key-encapsulation mechanism, for TLS key exchange.
Very few web servers have PQ ciphers enabled. It is as simple as changing a text configuration file to specify a different cipher on the webserver, once the ciphers are tested by time and money.
There are patched versions of the OpenSSH server, for example, but PQ support is not yet merged in core there either.
There are PQ ciphers and there are PQ cryptographic hashes.
There are already PQ-resistant blockchains.
Should Bitcoin hard fork to double key sizes or to implement a PQ cipher and hash?
Spelunking for Bitcoin by generating all possible keys and checking their account balances is not prevented by PQ algorithms.
Banking, finance, and critical infrastructure also need to upgrade to PQ ciphers. As with mining rigs, it is unlikely that existing devices can be upgraded with PQ software; we will need to buy new devices and recycle the existing non-PQ devices.
If banks are on a 5-year IT refresh cycle, that means they need to be planning to upgrade everything to PQ 5 years or more before a QC (quantum computer) with a sufficient number of error-corrected qubits is online for adversaries who steal cryptoassets from people on the internet.
> There is a pending hard fork to PQ Post Quantum algorithms for all classical blockchains.
Where is this pending HF for BTC?
How can all of this work realistically without constant cat & mouse catch up game?
> Spelunking for Bitcoin by generating all possible keys and checking their account balances is not prevented by PQ algorithms.
So there will be no protection against this? There is no protection possible?
Is there a PR yet, and also a BIP?
Other DLTs also have numbered document procedures for managing soft forks, hard forks, and changes to cipher and hash algorithms.
Litecoin, for example, is Bitcoin with the scrypt hash algorithm instead of double SHA-256.
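Both hash functions are exposed by Python's hashlib, so the difference can be sketched. The scrypt parameters (N=1024, r=1, p=1, header used as both password and salt) follow Litecoin's published scheme, but the 80-byte header below is a placeholder, not a real block:

```python
import hashlib

header = b"\x00" * 80  # placeholder 80-byte block header

# Bitcoin-style proof of work: double SHA-256 of the header.
btc_digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Litecoin-style proof of work: memory-hard scrypt with N=1024, r=1, p=1,
# using the header itself as both password and salt.
ltc_digest = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)

assert len(btc_digest) == len(ltc_digest) == 32
```

The point of scrypt's memory-hardness was to resist the kind of cheap ASIC parallelism that SHA-256 mining rigs exploit, though scrypt ASICs were eventually built anyway.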
Proof-of-Work mining rigs implement hashing algorithms in hardware as ASICs and FPGAs.
Stellar and RippleNet validation servers still implement the same in software, FWIU?
Are there already ASICs for any of the PQ Hash and Cipher algorithms?
SSL termination is expensive but necessary for HTTPS Everywhere, and now for HTTP Strict-Transport-Security (HSTS) headers and the HSTS preload list.
Are there still ASIC or FPGA SSL accelerator cards that need to implement PQ ciphers and hashes?
Multisig and similar m:n smart contracts support requiring more keys for a transaction to complete.
Rather than in an account with one sk/pk pair, funds can be stored in escrow such that various conditions must be met before a transaction can move the funds.
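The m-of-n idea can be sketched in plain Python; Bitcoin actually implements this in Script with OP_CHECKMULTISIG, and the function name here is illustrative, not a real API:

```python
# Hypothetical m-of-n threshold check, sketching the multisig idea:
# a transaction completes only if at least m of the n possible signers
# produced valid signatures.
def multisig_satisfied(valid_signatures: int, m: int, n: int) -> bool:
    """True iff at least m of the n authorized keys signed."""
    assert 0 <= valid_signatures <= n and 0 < m <= n
    return valid_signatures >= m

# A 2-of-3 escrow: two valid signatures release the funds; one does not.
release_ok = multisig_satisfied(2, m=2, n=3)   # True
release_no = multisig_satisfied(1, m=2, n=3)   # False
```

With a PQ-capable adversary in mind, the appeal is that compromising one key is not enough to move the funds.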
Running a full node helps the network by keeping another copy online and synchronized. A full node can optionally also index by transaction id (with LevelDB).
The block and transaction messages can be logged and monitored. Though it doesn't cost anything to check a balance given authorized or forged keys.
Spending the money in a bitcoin account discloses the public key, whereas only the double hash of the pubkey is necessary to send money to an account or scriptHash.
The market does not appear to price infosec value, risk, or technical debt into cryptoasset prices.
PQ or non-PQ does not predict asset price in 2025-02.
Breach disclosures apparently hardly affect asset prices, which is unfortunate if we want them to limit their and our taxable losses.
An Experimental Study of Bitmap Compression vs. Inverted List Compression
ScholarlyArticle: "An Experimental Study of Bitmap Compression vs. Inverted List Compression" (2017) https://dl.acm.org/doi/10.1145/3035918.3064007
Inverted index > Compression: https://en.wikipedia.org/wiki/Inverted_index#Compression :
> For historical reasons, inverted list compression and bitmap compression were developed as separate lines of research, and only later were recognized as solving essentially the same problem. [7]
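The "same problem" observation can be made concrete: a sorted posting list and a bitmap encode the same set of document IDs, and intersection is a list merge in one representation versus a bitwise AND in the other. A small sketch using Python ints as (uncompressed) bitmaps:

```python
# A sorted inverted (posting) list and a bitmap encode the same doc-ID set;
# intersecting two terms is a merge vs. a bitwise AND.
def to_bitmap(postings):
    bm = 0
    for doc_id in postings:
        bm |= 1 << doc_id      # set the bit for each document ID
    return bm

def from_bitmap(bm):
    return [i for i in range(bm.bit_length()) if bm >> i & 1]

a = [1, 3, 5, 8]               # posting list for term A
b = [3, 4, 5, 9]               # posting list for term B

merged = sorted(set(a) & set(b))                    # inverted-list intersection
anded = from_bitmap(to_bitmap(a) & to_bitmap(b))    # bitmap intersection
assert merged == anded == [3, 5]
```

Compression schemes on either side (delta/varint codes for lists, run-length or Roaring-style containers for bitmaps) are then just different encodings of this one set.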
> and only later were recognized as solving essentially the same problem. [7]
"Hard problems that reduce to document ranking" https://news.ycombinator.com/item?id=43174910#43175540
Ctrl-F "zoo" https://westurner.github.io/hnlog/#comment-36839925 #:~:text=zoo :
> Complexity Zoo, Quantum Algorithm Zoo, Neural Network Zoo
Programming Language Zoo https://plzoo.andrej.com/
IBM completes acquisition of HashiCorp
Hashicorp blog post: https://www.hashicorp.com/en/blog/hashicorp-officially-joins...
Jeff Bezos' revamp of 'Washington Post' opinions leads editor to quit
Here's a post linking to a BBC article about same that was flagged and censored here: https://news.ycombinator.com/item?id=43191562 ;
> the newspaper's opinion section will focus on supporting “personal liberties and free markets",
Free trade! Fair trade!
> and pieces opposing those views will not be published.
Boo, fascist corporate oligarchical censorship!
You might say, "actually that's not fascism, Bob" because fascism is when the government exercises control over the non-government-held corporations.
Fascism is like domming Apple's DEI policies when without Congress you can't make law, or telling people they should buy TikTok for you, with your name on all the checks.
Fascism: https://en.wikipedia.org/wiki/Fascism
I think the argument is that since the US has legal and well-established corruption there's no difference. Meaning, a billionaire is easily more influential in legislation than a legislator or even multiple legislators, so he is now inseparable from government and is an agent of the government.
At least they've disclosed their intent to impose editorial bias on the opinion section. It doesn't say "Fair and Balanced."
From "Fed to ban policymakers from owning individual stocks" (2021) https://news.ycombinator.com/item?id=28951646 :
> "Blind Trust" > "Use by US government officials to avoid conflicts of interest" https://en.wikipedia.org/wiki/Blind_trust :
>> The US federal government recognizes the "qualified blind trust" (QBT), as defined by the Ethics in Government Act and related regulations.[1] In order for a blind trust to be a QBT, the trustee must not be affiliated with, associated with, related to, or subject to the control or influence of the government official.
>> Because the assets initially placed in the QBT are known to the government official (who is both creator and beneficiary of the trust), these assets continue to pose a potential conflict of interest until they have been sold (or reduced to a value less than $1,000). New assets purchased by the trustee will not be disclosed to the government official, so they will not pose a conflict.
The Ethics in Government Act which created OGE was passed by Congress in 1978 in response to Watergate: https://en.wikipedia.org/wiki/Ethics_in_Government_Act
Should Type Theory (HoTT) Replace (ZFC) Set Theory as the Foundation of Math?
In a practical sense, hasn't type theory already replaced ZFC in the foundations of math? Lean is what working mathematicians currently use to formally prove theorems, and Lean is based on type theory rather than ZFC.
But HoTT is removed from lean core?
From https://news.ycombinator.com/item?id=42440016#42444882 :
> /? Hott in lean4 https://www.google.com/search?q=hott+in+lean4
https://github.com/forked-from-1kasper/ground_zero :
> Lean 4 HoTT Library
"Should Type Theory Replace Set Theory as the Foundation of Mathematics?" (2023) https://link.springer.com/article/10.1007/s10516-023-09676-0
Show HN: Probly – Spreadsheets, Python, and AI in the browser
Probly was built to reduce context-switching between spreadsheet applications, Python notebooks, and AI tools. It’s a simple spreadsheet that lets you talk to your data. Need pandas analysis? Just ask in plain English, and the code runs right in your browser. Want a chart? Just ask.
While there are tools available in this space like TheBricks, Probly is a minimalist, open-source solution built with React, TypeScript, Next.js, Handsontable, Hyperformula, Apache Echarts, OpenAI, and Pyodide. It's still a work in progress, but it's already useful for my daily tasks.
TIL that Apache Echarts can generate WAI-ARIA accessible textual descriptions for charts and supports WebGL. https://echarts.apache.org/en/feature.html#aria
apache/echarts: https://github.com/apache/echarts
Marimo notebook has functionality like rxpy and ipyflow to auto-reexecute input cell dependencies fwiu: https://news.ycombinator.com/item?id=41404681#41406570 .. https://github.com/marimo-team/marimo/releases/tag/0.8.4 :
> With this release, it's now possible to create standalone notebook files that have package requirements embedded in them as a comment, using PEP 723's inline metadata
marimo-team/marimo: https://github.com/marimo-team/marimo
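For reference, PEP 723 inline script metadata is just a structured comment at the top of the file; the dependency and greeting below are only examples:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = ["marimo"]
# ///
# A runner such as marimo or `uv run` parses the comment block above and
# installs the listed dependencies before executing the file; a plain
# Python interpreter ignores it entirely.
GREETING = "hello from a self-contained notebook file"
```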
ipywidgets is another way to build event-based UIs in otherwise Reproducible notebooks.
datasette-lite doesn't work with jupyterlite and emscripten-forge yet, FWIU, but does build SQLite in WASM with pyodide. https://github.com/simonw/datasette-lite
pygwalker: https://github.com/Kanaries/pygwalker .. https://news.ycombinator.com/item?id=35895899
How do you record manual interactions with ui controls and spreadsheet grids to code for reproducibility?
> "Generate code from GUI interactions; State restoration & Undo" https://github.com/Kanaries/pygwalker/issues/90
> The Scientific Method is testing, so testing (tests, assertions, fixtures) should be core to any scientific workflow system.
ipytest has a %%ipytest cell magic to run functions that start with test_ and subclasses of unittest.TestCase with the pytest test runner. https://github.com/chmp/ipytest
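A sketch of the shape this takes (the normalize function is a made-up example; the magic lines appear as comments because they only work inside a notebook):

```python
# In a notebook, after `%pip install ipytest`:
#
#   import ipytest
#   ipytest.autoconfig()
#
# then a later cell starting with `%%ipytest` runs that cell's test_*
# functions with the pytest runner. The cell body is ordinary pytest style:

def normalize(xs):
    """Scale a list of numbers so they sum to 1 (hypothetical example)."""
    total = sum(xs)
    return [x / total for x in xs]

def test_normalize_sums_to_one():
    assert abs(sum(normalize([1, 2, 3])) - 1.0) < 1e-9
```

Because these are plain functions, the same cell can later be copied into a .py module and collected by pytest unchanged.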
How can test functions with assertions be written with Probly?
Probly doesn't have built-in test assertion functionality yet, but since it runs Python (via Pyodide) directly in the browser, you can write test functions with assertions in your Python code. The execute_python_code tool in our system can run any valid Python code, including test functions.
This is something we're considering for future development, so this is a great shout!
To have tests that can be copied or exported into a .py module from a notebook is advantageous for prototyping and reusability.
There are exploratory/discovery and explanatory forms and workflows for notebooks.
A typical notebook workflow: get it working with Ctrl-Enter and manual output checks; wrap it in functions with defined variable scopes and few module/notebook globals; write a test function that checks the output every time; write markdown and/or docstrings; and then move what can be reused into regular modules.
nbdev has an 'export a notebook input cell to a .py module' feature, and formatted docstrings like sphinx apidoc but in notebooks. IPython has `%psource module.py` for pygments-style syntax highlighting of external .py modules and `%save output.py` for saving input cells to a file, but there are not yet IPython magics to read from or write to certain lines within a file like nbdev.
To run the chmp/ipytest %%ipytest cell magic with line or branch coverage, it's necessary to `%pip install ipytest pytest-cov` (or `%conda install ipytest pytest-cov`).
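A sketch of what such a test cell might look like (the function and test names here are illustrative, not from ipytest itself); in a notebook, the cell body below would begin with `%%ipytest`:

```python
# Illustrative notebook cell runnable by the ipytest/pytest test runner.
# In a notebook this cell would start with:  %%ipytest -v

def normalize(values):
    """Scale a list of numbers so they sum to 1.0."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize an all-zero sequence")
    return [v / total for v in values]

def test_normalize_sums_to_one():
    # Assertions on output, checked on every run per the Scientific Method point above.
    result = normalize([1, 1, 2])
    assert abs(sum(result) - 1.0) < 1e-9
    assert result == [0.25, 0.25, 0.5]

def test_normalize_rejects_zeros():
    try:
        normalize([0, 0])
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"
```

Because the cell is plain pytest-style Python, it can later be copied verbatim into a .py module, per the reusability point above.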
jupyter-xeus supports environment.yml with jupyterlite with packages from emscripten-forge: https://jupyterlite-xeus.readthedocs.io/en/latest/environmen...
emscripten-forge src: https://github.com/emscripten-forge/recipes/tree/main/recipe... .. web: https://repo.mamba.pm/emscripten-forge
A Systematic Review of Quantum Computing in Finance and Blockchains
"From Portfolio Optimization to Quantum Blockchain and Security: A Systematic Review of Quantum Computing in Finance" (2023) https://arxiv.org/abs/2307.01155
The FFT Strikes Back: An Efficient Alternative to Self-Attention
Google introduced this idea in 2022 with "FNet: Mixing Tokens with Fourier Transforms" [0].
Later they found that matrix multiplication on their TPUs was faster than the FFT in most scenarios.
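FNet's token-mixing step can be sketched in a few lines of NumPy: replace the attention sublayer with the real part of a 2D FFT over the sequence and hidden dimensions (shapes and values below are illustrative only):

```python
import numpy as np

def fourier_mix(x: np.ndarray) -> np.ndarray:
    """FNet-style mixing sketch: x is (seq_len, hidden_dim); the 2D FFT
    mixes along both axes with no learned parameters, and only the real
    part is kept."""
    return np.real(np.fft.fft2(x))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))   # toy "token embeddings"
y = fourier_mix(x)
assert y.shape == x.shape         # O(n log n) mixing, same shape out
```

This is what makes the TPU comparison interesting: the FFT is asymptotically cheaper than attention's matmuls, but a dense matmul maps better onto matrix-multiply hardware.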
That seems like an odd comparison; specialty hardware is often better, right?
Hey, do DSPs have special hardware to help with FFTs? (I’m actually asking, this isn’t a rhetorical question, I haven’t used one of the things but it seems like it could vaguely be helpful).
(Discrete) Fast Fourier Transform implementations:
https://fftw.org/ ; FFTW: https://en.wikipedia.org/wiki/FFTW
gh topic: fftw: https://github.com/topics/fftw
xtensor-stack/xtensor-fftw is similar to numpy.fft: https://github.com/xtensor-stack/xtensor-fftw
FFTW-API wrappers: NVIDIA cuFFTW, AMD amd-fftw, and Intel MKL FFTW
NVIDIA CuFFT (GPU FFT) https://docs.nvidia.com/cuda/cufft/index.html
ROCm/rocFFT (GPU FFT) https://github.com/ROCm/rocFFT .. docs: https://rocm.docs.amd.com/projects/rocFFT/en/latest/
AMD FFT, Intel FFT: https://www.google.com/search?q=AMD+FFT , https://www.google.com/search?q=Intel+FFT
project-gemmi/benchmarking-fft: https://github.com/project-gemmi/benchmarking-fft
"An FFT Accelerator Using Deeply-coupled RISC-V Instruction Set Extension for Arbitrary Number of Points" (2023) https://ieeexplore.ieee.org/document/10265722 :
> with data loading from either specially designed vector registers (V-mode) or RAM off-the-core (R-mode). The evaluation shows the proposed FFT acceleration scheme achieves a performance gain of 118 times in V-mode and 6.5 times in R-mode respectively, with only 16% power consumption required as compared to the vanilla NutShell RISC-V microprocessor
"CSIFA: A Configurable SRAM-based In-Memory FFT Accelerator" (2024) https://ieeexplore.ieee.org/abstract/document/10631146
/? dsp hardware FFT: https://www.google.com/search?q=dsp+hardware+fft
Ask HN: Who's been picking up trade deals due to US tariff threats?
Trump claimed tariffs are necessary under drug-war wartime authorizations, and threatened our immediate neighbors and BRICS with tariffs this year.
Which countries are winning trade and tech deals while the US is forcing itself to pay tariffs on imports to collectively punish others without due process?
Example: China is now buying Canadian oil (instead of Russian oil at a discount due to sanctions)
/? canada tariffs: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
"Trump tariffs inspire economic patriotism in Canada" https://news.ycombinator.com/item?id=42922237
FWIU they have regressed from free trade all the way to tariffs on Canada at 25% on everything but oil, which is at 10%?
And they uncancelled and re-approved the Keystone XL tar sands midwest plains liability.
As informal as it might seem to list places where the US might start losing out on trade, given the current political situation the last thing needed is a simple list that might fuel narcissistic outrage by threatening the golden one's genius plan, especially later on when (if?) it doesn't pan out and a new round of allocating blame and punishment begins. One cannot expect such people to innately understand why some reactions occur.
Just look at Musk's outburst after he burnt tw-tter: the long-term advertisers figured the people they were selling to had probably moved on to other platforms, and likewise pulled their advertising from the new mess.
All we (the rest of the world) can do is hope that the initial tariffs imposed on trade will be enough to look like they're having a win.
"EU and Mexico revive stalled trade deal as Trump tariffs loom" https://www.reuters.com/world/eu-mexico-revive-stalled-trade...
FWIU the President of Mexico told Trump that they would be more concerned about migrants crossing their northern border if the US were more concerned about arms trafficked to Mexico.
That's sort of funny, but then again IMO the biggest difference to the immigration rate would be something the US doesn't seem to have: a fixed minimum rate of pay regardless of whether people are US-born citizens or not.
Basic income experiments here have not succeeded FWIU; https://hn.algolia.com/?q=basic+income
Cutting taxes did not pay for the relief checks signed with his personal brand, nor for the record relief-loan fraud; so that's still not paid off either.
Recipe for national debt: increase expenses and cut revenue.
"Starve the beast" said the Reaganites who increased taxes on the middle class and increased foreign war expenditures and future obligations: https://en.wikipedia.org/wiki/Starve_the_beast
"Where do people get that Reagan raised taxes 11 times? I don’t completely understand this." https://www.quora.com/Where-do-people-get-that-Reagan-raised...
"The Mostly Forgotten Tax Increases of 1982-1993" https://www.bloomberg.com/view/articles/2017-12-15/the-mostl...
Total Tax Receipts as a % of GDP would tell the story; but they also increased the debt limit 18 times.
"Federal Receipts as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/FYFRGDA188S
"Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
Combined chart: https://fred.stlouisfed.org/graph/?g=1DTZx
History of the US debt ceiling: https://en.wikipedia.org/wiki/History_of_the_United_States_d...
I don't think that debt financing onto future generations and starting conflicts that have since cost trillions made America great.
Would improving conditions in their home country change the immigration rate? But won't there still be climate refugees? It's hot, there's no water, there's no work.
Just watched an alarming presentation on U-3 and U-6 as indicators of the unemployment rate: U-6 includes underemployment at less than 25K/yr.
Ah sorry for not being to the point as I forget just how many systems there are globally - I meant minimum wage payments.
The effect would not be immediate and would take years, though a tax break for businesses for each US citizen on their payroll would surely help speed it up. I have known too many US-based people who moan about the influx but are happy to have jobs done by illegals at a really dirt-cheap rate.
As an additional note, Australia had this fair-wage pay as law for a great number of years, but somewhere along the line people plain forgot. Back in the 70s the country used to have Aussie-based workers who travelled the entire country, north to south and back again, following seasonal work in agriculture. However, the process for hiring an employee involved a handful of red tape most DIY accounting folks didn't like dealing with, which meant the option of a simplified payment stream to a company providing labour was and still is very attractive, even though some of the procedures have been relaxed. (Also, until a decade or two ago, the employer would for years be paying into a black-hole fund for worker's compensation, even if their crop was wiped out or they otherwise would not be needing any additional workers.)

In the 90s I got to witness all sorts of BS schemes to get non-Aussie workers into the workforce, like 457 visas so companies could fill jobs that they couldn't fill locally. Eventually everyone and their dog cottoned on to the shenanigans, and for the most part a lot went by the wayside.

We still hear noise here in Aussie land about how backpackers save this and that because Aussies are lazy and whatnot. Nope: most here who grew up in the heat know that working in conditions too hot, or otherwise unsafe, has long-term consequences; it might be 5 years down the track before the kidney damage finally reaches a point where it needs to be addressed. Backpackers (tourists with a work visa who travel and work their way around a country) are typically long gone back in their homeland, so they're ideal for scam labour companies and the odd no-moral-compass farmer.
Andrew Yang (and Sam Altman) probably have a piece to say about basic income / wage subsidies in context to the robots and sophisticated AI taking our jobs and tiny houses.
Migrant labor exploitation? What would our food cost? How many people does it take to mow a sand lawn?
I hope that the current hate for immigrants isn't much more than divisive political fearmongering and splitting, and that it will diminish when they regain their humility and self-respect, due to food prices and having a conscience.
Creating jobs on the other side of the border would probably be more cost-effective.
IDK, "Terminator: Dark Fate", "The Big Green", "Amistad", "McFarland, USA"
H1B competition in tech is fierce here, and the reason we don't get hired in our own country.
India and China have more gifted students than we have students.
Americans won't work the fields anymore; we'll pay Asia to manufacture robots and learn sustainable farming practices like no-till farming later.
The new sustainable aviation fuel subsidy (of the previous administration) requires sustainable farming practices so that we're not subsidizing unrealized losses due to soil depletion. It doesn't make sense to pay them to waste the land without consideration.
Your success with fighting desertification is inspiring. We're still not sure why the Colorado river is running dry for the southwest where it's always been dry and carelessly farmed and drilled. TIL about bunds and preventing water from flashing off due to impacted soil trashed by years of tilling and overgrazing.
I've a few friends who've traveled to Oceania for school and work.
There should be an Australian "The Office".
Welfare economics: https://en.wikipedia.org/wiki/Welfare_economics
I would never consider a fair minimum wage, applied regardless of citizenship, to be a subsidy.
As for supporting less developed countries -- around 1975 the formation of the Lima agreement[1] came to pass, a United Nations strategy to shift some manufacturing to less developed countries and share the wealth. I suspect that's yet another reason why some US companies expanded into South American countries.
As for A.I. bots taking over our simple repetitive jobs, or manual jobs like construction duties: due to complexity and unexpected job functions, it's not something that's going to be worth pursuing too hard. Construction tasks were the sort of jobs I was referencing that some people liked getting done for rock-bottom dollar by way of desperate immigrants; and though people do greatly benefit from really low rates when it's their own money and mostly DIY projects, a bot replacement is a novel idea, but ... an A.I. general-practice doctor bot will exist long (years) before that, since the data for medicating and solving health issues is a much, much larger data set, with at least 50 years of information of which most is still sort of relevant. As for construction, well, obviously one meets plenty of people who think it's fairly simple and that a few lines in a DIY book has it covered.

Yes, the minimum wage would mean the nature and scope of some people's personal (back yard) projects would change. Small business would obviously be impacted, but a smart govt would have something in place to balance out having to pay higher wages. Overall it's better that people can work if they want to.
Food prices generally should not have a high labour-wage component when food leaves the farm gate, unless it's a back-yard hobby situation. If a whopping big farm is crying foul that minimum wage rates are too high, so high they can't make a profit, 9/10 times they're doing something wrong or ... they got / are getting fleeced when they sold / sell their produce to the markets. The act of fleecing a farmer or farm group is, IMO, immoral.
Actually (given the level of interest by a few people I've met or know) I would think a lot of people would love to work in the fields, so long as good farming practices are observed: i.e. safe work, not half killing themselves so that by day's end they are done for; pay at a decent and liveable wage (i.e. not slave-labour rates); and time for themselves, with perhaps an hour's travel to local affordable further / higher education they can participate in part-time if they are so inclined.
We also hear the same -- that Aussies won't or don't want to work on a farm because [insert some BS excuse here], and it's fortunate there are backpackers (young tourists) who are prepared to ... If the job is straight, pays OK, and has half-decent conditions, there will be no end of willing home-born applicants.
As for the sustainable aviation fuel subsidy: I would guess for most people it implies a focus on the more obvious oil-seed crops on otherwise good agricultural land which could be utilised for food or fibre production. But I think the trick is to work out what oil-producing crops could exist in marginal country; moreover, a tree crop where it's feasible to drip-irrigate at or below the tree roots, so that the very limited water isn't lost evaporating from the soil surface.
As for tackling desertification, sustainability is a new mantra in the farming sector. I don't recall any significant projects aimed at revegetating large tracts of arid land that I'd call a success. (There are, though, property owners who have done wonders rehabilitating flogged country so it becomes more productive whilst ensuring a diverse landscape.) However, for fighting desertification I do recall, from the 90s or so, a very good example of the benefits of Permaculture as per Bill Mollison [2]: rehabilitating a small knob / small hill of ground, iirc somewhere in Africa, back into a scene of lush greenery which might even have included some banana trees, whilst the rest of the landscape was bleak and dry with a minimum of grassy vegetation.
>There should be an Australian "The Office".
Never really got into either versions of The Office however something in the same vein but a little over the top is a workplace based Aussie show -- Swift and Shift Couriers[3]
[1] https://www.unido.org/sites/default/files/2014-04/Lima_Decla... [pdf]
I looked it up:
Wage subsidy: https://en.wikipedia.org/wiki/Wage_subsidy :
> A wage subsidy is a payment to workers by the state, made either directly or through their employers. Its purposes are to redistribute income and to obviate the welfare trap attributed to other forms of relief, thereby reducing unemployment. It is most naturally implemented as a modification to the income tax system.
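The quoted definition says a wage subsidy is most naturally implemented as a modification to the income tax system; a toy sketch of that idea (all rates and thresholds below are invented for illustration, not from any real tax code):

```python
# Illustrative only: a wage subsidy implemented as an income-tax
# modification (negative-income-tax style). Rates/thresholds are made up.
def net_income(gross, subsidy_rate=0.5, breakeven=20_000, tax_rate=0.25):
    """Below the breakeven point, workers receive a subsidy that phases
    out as earnings rise; above it, ordinary income tax applies."""
    if gross < breakeven:
        return gross + subsidy_rate * (breakeven - gross)
    return gross - tax_rate * (gross - breakeven)

# A worker earning 10,000 gross receives a 5,000 subsidy:
assert net_income(10_000) == 15_000
# At the breakeven point the transfer is zero:
assert net_income(20_000) == 20_000
```

The phase-out is what addresses the welfare trap the quote mentions: each extra dollar earned still raises net income.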
Permaculture and Bill Mollison: perpetual spinach and Swiss Chard do well here. TIL there are videos about perennials and about companion planting. Potatoes in the soil below tomatoes, in food safe HDPE 5 gallon buckets with a patch cut into the side. But that's still plastic in the garden. Three Sisters: Corn, Beans, Squash.
Import substitution industrialization > Latin America: https://en.wikipedia.org/wiki/Import_substitution_industrial...
"The Biggest Mapping Mistake of All Time" re: Baja and the Sea of Cortez: https://youtube.com/watch?v=Hcq9_Tw2eTE&
Confessions of an Economic Hit Man is derided as untrue. See also links to "War is a Racket", which concludes with chapter "5. To Hell With War" https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit...
Structural adjustment > Criticisms: https://en.wikipedia.org/wiki/Structural_adjustment
Golden rule: https://en.wikipedia.org/wiki/Golden_Rule
The "Farming Simulator" game has the user operate multiple labor positions on the farm as concurrently as they can master.
We can't do similar backpackers-style field labor because there aren't travel visas that are also work visas FWIU.
Volunteering while on a student visa is fine AFAIU. Non-commercial open source software development is also something that folks enjoying their travels can do for no money here.
Laser weeding is still fairly underutilized. There's not yet an organic standard for herb. Nicotine is an approved "organic" pesticide, for example.
Here we have Arbor Day Foundation, which will ship 10 free saplings for an easy annual tree survey.
TIL trees can be shipped domestically in a triangular cardboard carton, in a plastic bag, with a damp sheet of paper around their roots.
It may be the work of Greening Australia that I've heard about: https://en.wikipedia.org/wiki/Greening_Australia
One of a number of videos about planting trees on YouTube: "Australia's Remarkable Desert Regreening Project Success" https://youtube.com/watch?v=HRo0RG02Mg0&
I've heard that China has changed their approach to fighting desertification in the Gobi desert; toward a healthy diversity of undergrowth instead of a monoculture of trees; ecology instead of just planting rows of trees that won't last by comparison.
Half Moon Bunds look like they are working to regreen the Sahel. Bunds would require different farm machines than rototilling tractors that cause erosion.
"Mastering the Art of Half Moon Bunds" https://youtube.com/watch?v=XyH6dFlv9dk&
As a software dev with no real experience in hydrological engineering for sustainable agriculture, I've found Andrew Million's light board videos to be a great resource.
"Inside Africa's Food Forest Mega-Project" https://youtube.com/watch?v=xbBdIG--b58&
"Flight of the Conchords" (HBO) and "Eagle vs Shark" are from NZ.
From YouTube playlists: Derek from Veritasium, The Slow Mo Guys, The Engineering Mindset, Morello, Jade from upandatom, Sarah from Chuck, and Synthony are all from AU as I recall.
"Be Kind Rewind" (2008) https://en.wikipedia.org/wiki/Be_Kind_Rewind
The basic minimum wage [1] is a wage which doesn't need to be subsidised. Only when eliminating dirt-poor / poverty-line pay rates from a region that had relied on them, as I described, would any govt perhaps need to offer a subsidy to employers to swallow the bitter pill of paying the fair rate to all workers. In Australia employers are not entitled to any payment or entitlement from govt or other entities to continue to employ a long-term employee earning an existing wage. Some big companies in the past have had a bit of a whine and gone for a grab at some sort of handout, but not since the last major abuse, where the govt helped a couple of poor companies out and got burnt when the sods moved overseas or restructured, effectively ending the jobs at risk that were the bargaining chip in the first instance ...
I enjoy meeting the few backpackers that visit my region and I've no issue with them working, but for a very long time I've informed any that I crossed paths with, that they are entitled to the same pay rate as anyone else here, ie don't get scammed -- iirc, all of them had been getting scammed one way or another.
I dislike (very understated) the majority of the labour-hire companies that were around arranging the on-farm work. As in the past, they actively discriminated against regular Aussies looking for a job, since an Aussie is more likely to know what is right and wrong and what is not acceptable, and thus the labour hirer's long-running scam would soon come under threat. It took time, but the govt here is now looking at any bad contractors; yet we still hear noise pieces chirping that Aussies are lazy ... and ...
I am surprised there is no US visa that addresses backpackers -- I'd thought that in the free western world the small fraction of tourists who wanted to backpack their way across the country could do so, if they wanted that option and met the relevant criteria. As I understand it, there are generally rules, such as that the prospective backpacker is not actually flat broke.
Finding a cheap, sustainable, near-to-no-touch mechanical removal of weeds from a crop is somewhat the holy grail of farming. Seems like simple might win out in the end, though: small robotic platforms that traverse the field ... The existing models I've seen thus far are solar-powered / electric, and they detect, identify, and then mechanically upset / remove the weed as they slowly travel their target paddock.
As for a massive project aimed at greening the Great Australian Desert [2]: I'd not heard about it, but I don't mean to imply it's not happening; Australia is actually a pretty big place. However, the cover photo in the link suggests not desert proper but very marginal flat country, once cleared and felled, whose soils were flogged by the initial cropping; all it could be is poor farming country that is best rehabilitated and re-vegetated with trees and shrubs.
There was actually a lot of conservation / rehabilitation work carried out in the 1920s, once it became very clear that the farming practices of the time did not suit the geology and climate of this country.
I think as the years progress, Australia will see different farming systems come into play, like the alley cropping system. [3]
[1] https://en.wikipedia.org/wiki/Minimum_wage
[2] https://lifeboat.com/blog/2022/09/how-australia-is-regreenin...
[3] https://www.agrifarming.in/alley-cropping-system-functions-o...
Edit to fix links
Reaganomics > Policies: https://en.wikipedia.org/wiki/Reaganomics#Policies :
> The 1982 tax increase undid a third of the initial tax cut. In 1983 Reagan instituted a payroll tax increase on Social Security and Medicare hospital insurance. [25] In 1984 another bill was introduced that closed tax loopholes. According to tax historian Joseph Thorndike, the bills of 1982 and 1984 "constituted the biggest tax increase ever enacted during peacetime". [26]
Also,
>> said the Reaganites who increased taxes on the middle class and increased foreign war expenditures and future obligations:
To clarify, Reagan reduced taxes for the wealthiest the most, thus shifting the effective tax burden to the middle class and increasing inequality.
"Changes in poverty, income inequality, and the standard of living in the United States during the Reagan years" https://pubmed.ncbi.nlm.nih.gov/8500951/#:~:text=The%20rate%... :
> The rate of poverty at the end of Reagan's term was the same as in 1980. Cutbacks in income transfers during the Reagan years helped increase both poverty and inequality. Changes in tax policy helped increase inequality but reduced poverty.
Gini Index: https://en.wikipedia.org/wiki/Gini_coefficient
GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
"Make America Great Again": https://en.wikipedia.org/wiki/Make_America_Great_Again
At that time - during the 1970s and 1980s - Federal Debt as a percentage of GDP was much lower than it is today: 30-53% in the 1980s vs. 120% of GDP in 2024:
> "Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
The need for memory safety standards
> Looking forward, we're also seeing exciting and promising developments in hardware. Technologies like ARM's Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code.
IIRC there's some way that a Python C extension can accidentally disable the NX bit for the whole process.. https://news.ycombinator.com/item?id=40474510#40486181 :
>>> IIRC, with CPython the NX bit doesn't work when any imported C extension has nested functions / trampolines
>> How should CPython support the mseal() syscall? [which was merged in Linux kernel 6.10]
> We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process. In addition, as outlined in our Secure by Design whitepaper and in our memory safety strategy, we are deeply committed to building security into the foundation of our products and services.
> That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.
Secureblue: https://github.com/secureblue/Trivalent has hardened_malloc.
Memory safety notes and Wikipedia concept URIs: https://news.ycombinator.com/item?id=33563857
...
A graded memory safety standard is one aspect of security.
> Tailor memory safety requirements based on need: The framework should establish different levels of safety assurance, akin to SLSA levels, recognizing that different applications have different security needs and cost constraints. Similarly, we likely need distinct guidance for developing new systems and improving existing codebases. For instance, we probably do not need every single piece of code to be formally proven. This allows for tailored security, ensuring appropriate levels of memory safety for various contexts.
> Enable objective assessment: The framework should define clear criteria and potentially metrics for assessing memory safety and compliance with a given level of assurance. The goal would be to objectively compare the memory safety assurance of different software components or systems, much like we assess energy efficiency today. This will move us beyond subjective claims and towards objective and comparable security properties across products.
Piezoelectric Catalyst Destroys Forever Chemicals
> The system’s energy requirements vary depending on the wastewater’s conditions, such as contaminant concentration, water matrix, or the customer’s discharge requirements. In one of Oxyle’s full-scale units that treats 10 cubic meters per hour, energy consumption measured less than 1 kilowatt-hour per cubic meter, according to the company.
> Using the flow of water, rather than electricity, to activate the reaction makes the method far more energy efficient than other approaches, says Mushtaq.
Different approach, with no energy/volume (kWh/m^3) metric in the abstract:
"Photocatalytic C–F bond activation in small molecules and polyfluoroalkyl substances" (2024) https://www.nature.com/articles/s41586-024-08327-7 .. https://news.ycombinator.com/item?id=42444729
History of interactive theorem proving [pdf]
The most recent reference listed in this paper is "Truly modular (co)datatypes for Isabelle/HOL" (2014) .. https://scholar.google.com/scholar?cites=1837719396691256842...
Automated theorem proving > Related problems: https://en.wikipedia.org/wiki/Automated_theorem_proving#Rela...
awesome-code-llm > Coding for Reasoning: https://github.com/codefuse-ai/Awesome-Code-LLM#31-coding-fo...
New type of microscopy based on quantum sensors
"Optical widefield nuclear magnetic resonance microscopy" (2025) https://www.nature.com/articles/s41467-024-55003-5 :
> Crucially, each camera pixel records an NMR spectrum providing multicomponent information about the signal’s amplitude, phase, local magnetic field strengths, and gradients.
Show HN: jsonblog-schema – a JSON schema for making your blog from one file
JSON-LD or YAML-LD can be stored in the frontmatter in Markdown documents;
Schema.org is an RDFS schema with Classes and Properties:
https://schema.org/BlogPosting
Syntax examples can be found below the list of Properties on the "JSON-LD" tab.
The JSON schemas for schema.org in lexiq-legal/pydantic_schemaorg aren't yet rebuilt for pydantic v2 FWIU: https://github.com/lexiq-legal/pydantic_schemaorg
The W3C SHACL Shapes Constraint Language is the Linked Data schema validation spec; it is an alternative to JSON Schema, of which there are many implementations.
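For example, a minimal schema.org BlogPosting as JSON-LD that could be stored in Markdown frontmatter or a script tag (all field values below are placeholders):

```python
import json

# Sketch: a schema.org BlogPosting serialized as JSON-LD, suitable for
# Markdown frontmatter or a <script type="application/ld+json"> element.
post = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "Example post",           # placeholder values
    "datePublished": "2025-01-01",
    "author": {"@type": "Person", "name": "Jane Doe"},
}
doc = json.dumps(post, indent=2)
assert '"@type": "BlogPosting"' in doc
```

The `@context` and `@type` keys are what make this RDF-interpretable JSON-LD rather than an arbitrary JSON blog schema.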
How core Git developers configure Git
skwp/git-workflows-book > .gitconfig appendix: https://github.com/skwp/git-workflows-book?tab=readme-ov-fil... :
[alias]
unstage = reset HEAD # remove files from index (tracking)
uncommit = reset --soft HEAD^ # go back before last commit, with files in uncommitted state
https://learngitbranching.js.org/

charmbracelet/git-lfs-transfer: https://github.com/charmbracelet/git-lfs-transfer
jj-vcs/jj: https://github.com/jj-vcs/jj
Ggwave: Tiny Data-over-Sound Library
The acoustic modem is back in style [1]! And, of course, same frequencies (DTMF) [2], too!
DTMF has a special place in the phone signal chain (signals at these frequencies must be preserved end to end, for dialing and menu selection), but I wonder if there's something more efficient, using the "full" voice spectrum, with the various vocoders [3] in mind? Although it would be much creepier than hearing some tones.
[1] Touch tone based data communication, 1979: https://www.tinaja.com/ebooks/tvtcb.pdf
[2] touch tone frequency mapping: https://en.wikipedia.org/wiki/DTMF
[3] optimized encoders/decoders for human speech: https://vocal.com/voip/voip-vocoders/
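For reference, a DTMF digit is just the sum of one row tone and one column tone; a minimal NumPy sketch, using the standard keypad frequencies with '5' (770 Hz row, 1336 Hz column) as the example digit:

```python
import numpy as np

# Standard DTMF frequencies for digit '5' (Hz).
LOW, HIGH = 770.0, 1336.0
RATE = 8000  # samples/second, typical telephony rate

def dtmf_tone(duration_s=0.2, rate=RATE):
    """Synthesize the dual-tone signal for one keypress."""
    t = np.arange(int(duration_s * rate)) / rate
    return 0.5 * np.sin(2 * np.pi * LOW * t) + 0.5 * np.sin(2 * np.pi * HIGH * t)

samples = dtmf_tone()
# The two component tones should dominate the spectrum:
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / RATE)
peak = freqs[np.argmax(spectrum)]
assert abs(peak - LOW) < 10 or abs(peak - HIGH) < 10
```

A decoder runs the same spectral analysis in reverse, looking for one peak from the row group and one from the column group; that per-pair redundancy is part of why DTMF survives lossy voice channels.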
"Using the Web Audio API to Make a Modem" (2017) https://news.ycombinator.com/item?id=15471723
Missouri woman pleads guilty to federal charge in plot to sell Graceland
Graceland > Tourist destination: https://en.wikipedia.org/wiki/Graceland
Category:Films about Elvis Presley: https://en.wikipedia.org/wiki/Category:Films_about_Elvis_Pre...
Trump Plans to Liquidate Public Lands to Finance Sovereign Wealth Fund
Once an SWF has accumulated wealth, that wealth is invested in stocks, bonds, real estate, and other financial instruments to earn even more money.
Any suggestions on what I should invest in now, knowing that a US government fund is imminent? Bitcoin? TSLA? $TRUMP? DOGE?
Or do we think that it's going to pump everything in the world?
Social Security Trust Fund: https://en.wikipedia.org/wiki/Social_Security_Trust_Fund :
> The Trust Fund is required by law to be invested in non-marketable securities issued and guaranteed by the "full faith and credit" of the federal government. These securities earn a market rate of interest.[5]
Government bond > United States: https://en.wikipedia.org/wiki/Government_bond#United_States
> [...] One projection scenario estimates that, by 2035, the Trust Fund could be exhausted. Thereafter, payroll taxes are projected to only cover approximately 83% of program obligations. [7]
> There have been various proposals to address this shortfall, including: reducing government expenditures, such as by raising the retirement age; tax increases; investment diversification [8] and, borrowing.
Robots, automation, AI, K12 Q12 STEM training, basic research, applied science
What should social security be invested in?
First, should social security ever be privatized? No, because they would steal it all in fees compared to HODL'ing Index Funds and commodities (as a hedge against inflation).
What should social security be invested in?
Low-risk investments in the interest of all of the people who plan to rely upon OASDI for retirement and accidental disability (OASDI: OA "Old Age", S "Survivors", D "Disability", I "Insurance")
What should a sovereign wealth fund be invested in?
We don't do "soverign wealth funds" because we don't have a monarchy in the United States; we have a temp servant leader President role and we have Congress and they work together to prepare the budget.
The President of the United States has limited budgetary discretionary authority by design. OMB (Office of Management and Budget) must work with CBO (Congressional Budget Office) to prepare the budget.
US court upholds Theranos founder Elizabeth Holmes's conviction
I don't quite understand the reasoning for putting her in prison.
Yes, she deserves to be punished, but surely house arrest, community service, etc. make more sense for a crime of this nature, rather than using taxpayer money to house her for 9 years when she isn't a credible threat to society.
House arrest would make the math on “should I try fraud” lean heavily towards fraud, I think.
Maybe even more so if you’ve got a nice house.
Fraud: https://en.wikipedia.org/wiki/Fraud
US Sentencing Commission > Fraud: https://www.ussc.gov/topic/fraud
What would deter fraud?
"Here’s a look inside Donald Trump’s $355 million civil fraud verdict" (2024) https://apnews.com/article/trump-fraud-letitia-james-new-yor...
"Trump hush money verdict: Guilty of all 34 counts" .. "Guilty: Trump becomes first former US president convicted of felony crimes" (2024) https://apnews.com/article/trump-trial-deliberations-jury-te...
"Trump mistakes [EJC] for [ex-wife]. #Shorts" https://youtube.com/shorts/0tq3rh6bh_8 .. https://youtu.be/lonTBp9h7Fo?si=77DIJMrpBRgLcsMK
I don’t know what that’s supposed to mean regarding the topic.
> House arrest would make the math on “should I try fraud” lean heavily towards fraud I think.
You argue that house arrest is an insufficient deterrent for the level of fraud committed by defendant A.
Is the sentencing for defendant A consistent with the US Sentencing Guidelines, and consistent with other defendants convicted of fraud?
> Maybe even more so if you’ve got a nice house.
Defendant B apparently isn't even on house arrest, and apparently sent someone else to their civil rape deposition obstructively and fraudulently.
The fact that different cases play out differently, some possibly unwise and unjust, is no surprise to me.
"Lock her up!" He shouted about her. https://youtu.be/wS_Nrz5dNeU?si=XasLFHXQygx7IgSw ... /? lock her up: https://www.youtube.com/results?sp=mAEA&search_query=Lock+he...
Hard problems that reduce to document ranking
Ranking (information retrieval) https://en.wikipedia.org/wiki/Ranking_(information_retrieval...
awesome-generative-information-retrieval > Re-ranking: https://github.com/gabriben/awesome-generative-information-r...
Nixon's Revolutionary Vision for American Governance (2017)
/? nixon https://hn.algolia.com/?q=nixon ...
"New evidence that Nixon sabotaged 1968 Vietnam peace deal" (2017) https://news.ycombinator.com/item?id=13296696
(Watergate (1972-1974): https://en.wikipedia.org/wiki/Watergate_scandal )
History repeats itself!
Eisenhower, JFK, LBJ, Ford, Nixon, Carter, Reagan
https://news.ycombinator.com/item?id=42547326 ..Iran hostage crisis (1980-1981) https://en.wikipedia.org/wiki/Iran_hostage_crisis :
> The hostages were formally released into United States custody the day after the signing of the Algiers Accords, just minutes after American President Ronald Reagan was sworn into office
1980 October Surprise theory: https://en.wikipedia.org/wiki/1980_October_Surprise_theory
The Wrongs of Thomas More
Hey, Thomas More!
"The Saint" (1997) https://en.wikipedia.org/wiki/The_Saint_(1997_film) :
> Using the alias "Thomas More", Simon poses as
Larry Ellison's half-billion-dollar quest to change farming
Notes for Lanai island from an amateur:
Hemp is useful for soil remediation because it's so absorbent; which is part of why testing is important.
Is there already a composting business?
Do the schools etc. already compost waste food?
"Show HN: We open-sourced our compost monitoring tech" https://news.ycombinator.com/item?id=42201207
Canadian greenhouses? Chinese-Mongolian-Canadian greenhouses are wing-shaped and set against a berm:
"Passive Solar Greenhouse Technology From China?" https://youtube.com/watch?v=FOgyK6Jieq0&
Transparent wood requires extracting the lignin.
Transparent aluminum requires a production process, too.
There are polycarbonate hurricane panels.
Reflective material on one wall of the wallipini greenhouse (and geothermal) is enough to grow citrus fruit through the winter in Alliance, Nebraska. https://news.ycombinator.com/item?id=39927538
Glass allows more wavelengths of light through than plastic or recyclable polycarbonate, including UV-C, which is sanitizing.
Hydrogen peroxide cleans out fish tanks FWIU.
Various plastics are food safe, but not when they've been in the sun all day.
To make aircrete, you add soap bubbles to concrete with an air compressor.
/? aircrete dome build in HI and ground anchors
Catalan masonry vault roofs (in Spain, Italy, Mexico, Arizona) are strong, don't require temporary arches, and passively cool most efficiently when they have an oculus to let the heat rise out of the dome to openable vents to the wind.
U.S. state and territory temperature extremes: https://en.wikipedia.org/wiki/U.S._state_and_territory_tempe... :
> Hawaii: 15 °F (−9.4 °C) to 100 °F (37.8 °C)
> Nebraska: −47 °F (−43.9 °C) to 118 °F (47.8 °C)
"140-year-old ocean heat tech could supply islands with limitless energy" https://news.ycombinator.com/item?id=38222695 :
> OTEC: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio...
North Carolina is ranked 4th in solar, has solar farms, and has hurricanes (and totally round homes on stilts). FWIU there are hurricane-rated solar panels, flexible racks, ground mounts.
https://insideclimatenews.org/news/20092018/hurricane-floren...
"Sargablock: Bricks from Seaweed" https://news.ycombinator.com/item?id=37188180
"Turning pineapple skins into soap and other cleaning products" https://www.businessinsider.com/turning-pineapple-skins-into...
"Costa Rica Let a Juice Company Dump Their Orange Peels in the Forest—and It Helped" https://www.smithsonianmag.com/innovation/costa-rica-let-jui... https://www.sciencealert.com/how-12-000-tonnes-of-dumped-ora...
Akira Miyawaki and the Miyawaki method of forest cultivation for reforestation to fight desertification by regreening: https://en.wikipedia.org/wiki/Akira_Miyawaki :
> Using the concept of potential natural vegetation, Miyawaki developed, tested, and refined a method of ecological engineering today known as the Miyawaki method to restore native forests from seeds of native trees on very degraded soils that were deforested and without humus. With the results of his experiments, he restored protective forests in over 1,300 sites in Japan and various tropical countries, in particular in the Pacific region[8] in the form of shelterbelts, woodlands, and woodlots, including urban, port, and industrial areas. Miyawaki demonstrated that rapid restoration of forest cover and soil was possible by using a selection of pioneer and secondary indigenous species that were densely planted and provided with mycorrhiza.
Mycorrhiza spores can be seeded into soil to support root network development.
/? Mycorrhiza spore kit: https://www.google.com/search?q=Mycorrhiza+spore+kit
> Miyawaki studied local plant ecology and used species that have key and complementary roles in the normal tree community.
It also works in small patches; a self-sufficient "mini forest".
/? Miyawaki method before after [video search] https://www.google.com/search?q=miyawaki%20method%20before%2...
Brewing tea removes lead from water
/? tea bag microplastic: https://www.google.com/search?q=tea+bag+microplastic
There are glass and silver tea infusers.
"Repurposed beer yeast encapsulated in hydrogels may offer a cost-effective way to remove lead from water" https://phys.org/news/2024-05-repurposed-beer-yeast-encapsul...
"Yeast-laden hydrogel capsules for scalable trace lead removal from water" (2024) https://pubs.rsc.org/en/content/articlelanding/2024/su/d4su0...
"Application of brewing waste as biosorbent for the removal of metallic ions present in groundwater and surface waters from coal regions" (2018) https://www.sciencedirect.com/science/article/abs/pii/S22133...
https://ethz.ch/en/news-and-events/eth-news/news/2024/03/tur... :
> Protein fibril sponges made by ETH Zurich researchers [from whey protein] are hugely effective at recovering gold from electronic waste.
Aqueous-based recycling of perovskite photovoltaics
They say they use green solvents, but in the list of materials at the bottom I see Lead Iodide and Cesium Iodide, which doesn't strike me as too green of a thing to use.
Also, the abstract doesn't really make it clear whether the recycling is for complete panels or the "filling" of panels (sorry for the layman's terms). And whether this applies to any Perovskite-utilizing panel or just certain kinds.
But I am generally heartened at seeing thought regarding industrial processes which consider the end of productive life as well. I wish more products were designed to eventually be taken apart and reduced using standard processes, or even per-product processes - rather than the assumption being that more and more stuff is chucked into landfills.
Perovskite solar panels contain lead so you are going to have lead around, the question is if you can recycle the lead or if it goes in the environment.
Well, there is always alternative number 3: don't use perovskites.
Organic solar cell: https://en.wikipedia.org/wiki/Organic_solar_cell :
> 19.3%
Perovskite solar cell: https://en.wikipedia.org/wiki/Perovskite_solar_cell :
> 29.8%
If you need to use ~1.5x more area, but you avoid leaking any heavy metals or similarly toxic materials - I would say that's a win.
Of course, there are other parameters to consider.
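A quick back-of-envelope check of the "~1.5x" figure, using the record efficiencies quoted above (an illustrative sketch, not from the linked articles):

```python
# For the same power output, required panel area scales inversely with
# cell efficiency, so the extra area organic cells need is the ratio of
# the two record efficiencies quoted above.
perovskite_eff = 0.298  # record perovskite cell efficiency (29.8%)
organic_eff = 0.193     # record organic cell efficiency (19.3%)
area_ratio = perovskite_eff / organic_eff
print(round(area_ratio, 2))  # ~1.54, i.e. roughly the ~1.5x figure
```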
Show HN: Benchmarking VLMs vs. Traditional OCR
Vision models have been gaining popularity as a replacement for traditional OCR. Especially with Gemini 2.0 becoming cost competitive with the cloud platforms.
We've been continuously evaluating different models since we released the Zerox package last year (https://github.com/getomni-ai/zerox). And we wanted to put some numbers behind it. So we’re open sourcing our internal OCR benchmark + evaluation datasets.
Full writeup + data explorer here: https://getomni.ai/ocr-benchmark
Github: https://github.com/getomni-ai/benchmark
Huggingface: https://huggingface.co/datasets/getomni-ai/ocr-benchmark
Couple notes on the methodology:
1. We are using JSON accuracy as our primary metric. The end goal is to evaluate how well each OCR provider can prepare the data for LLM ingestion.
2. This methodology differs from a lot of OCR benchmarks, because it doesn't rely on text similarity. We believe text similarity measurements are heavily biased towards the exact layout of the ground truth text, and penalize correct OCR that has slight layout differences.
3. Every document goes Image => OCR => Predicted JSON. And we compare the predicted JSON against the annotated ground truth JSON. The VLMs are capable of Image => JSON directly, we are primarily trying to measure OCR accuracy here. Planning to release a separate report on direct JSON accuracy next week.
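As a sketch of what a field-level JSON-accuracy metric can look like (hypothetical helper names; not the benchmark's actual implementation), one can flatten both JSON documents into dotted key paths and score the fraction of ground-truth fields the prediction reproduces exactly:

```python
# Toy JSON accuracy: flatten nested dicts/lists into {dotted_path: leaf},
# then count exact matches against the ground truth. Layout differences in
# the OCR'd text don't matter; only the extracted fields do.
def flatten(obj, prefix=""):
    """Flatten nested dicts/lists into {dotted_path: leaf_value}."""
    items = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            items.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            items.update(flatten(v, f"{prefix}{i}."))
    else:
        items[prefix.rstrip(".")] = obj
    return items

def json_accuracy(predicted, ground_truth):
    """Fraction of ground-truth leaf fields matched exactly by the prediction."""
    truth = flatten(ground_truth)
    pred = flatten(predicted)
    if not truth:
        return 1.0
    matched = sum(1 for k, v in truth.items() if pred.get(k) == v)
    return matched / len(truth)
```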
This is a continuous work in progress! There are at least 10 additional providers we plan to add to the list.
The next big roadmap items are:
- Comparing OCR vs. direct extraction. Early results here show a slight accuracy improvement, but it’s highly variable on page length.
- A multilingual comparison. Right now the evaluation data is English only.
- A breakdown of the data by type (best model for handwriting, tables, charts, photos, etc.)
Harmonic Loss converges more efficiently on MNIST OCR: https://github.com/KindXiaoming/grow-crystals .. "Harmonic Loss Trains Interpretable AI Models" (2025) https://news.ycombinator.com/item?id=42941954
Making any integer with four 2s
> I've read about this story in Graham Farmelo's book The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius.
"The Strangest Man": https://en.wikipedia.org/wiki/The_Strangest_Man
Four Fours: https://en.wikipedia.org/wiki/Four_fours :
> Four fours is a mathematical puzzle, the goal of which is to find the simplest mathematical expression for every whole number from 0 to some maximum, using only common mathematical symbols and the digit four. No other digit is allowed. Most versions of the puzzle require that each expression have exactly four fours, but some variations require that each expression have some minimum number of fours.
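A brute-force search in the spirit of the puzzle can be sketched in a few lines; the templates below cover only a few parenthesizations and omit concatenation and the log/sqrt tricks, so this is illustrative rather than exhaustive:

```python
from itertools import product

def four_twos(target, digit=2):
    """Search arithmetic expressions using exactly four copies of `digit`
    that evaluate to `target`; returns an expression string or None."""
    ops = ["+", "-", "*", "/", "**"]
    d = str(digit)
    for o1, o2, o3 in product(ops, repeat=3):
        # a few of the possible parenthesizations (not all of them)
        for expr in (f"(({d}{o1}{d}){o2}{d}){o3}{d}",
                     f"({d}{o1}{d}){o2}({d}{o3}{d})",
                     f"{d}{o1}(({d}{o2}{d}){o3}{d})"):
            try:
                if eval(expr) == target:
                    return expr
            except (ZeroDivisionError, OverflowError):
                continue
    return None
```

Dirac's observation was that with nested square roots and logarithms, *every* integer is reachable, which is why the puzzle is considered closed in that variant.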
"Golden ratio base is a non-integer positional numeral system" (2023) https://news.ycombinator.com/item?id=37969716 :
> What about radix e^(pi*i), or just e?
The foundations of America's prosperity are being dismantled
> They warn that dismantling the behind-the-scenes scientific research programs that backstop American life could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. The US took nearly a century to craft its rich scientific ecosystem; if the unraveling that has taken place over the past month continues, Americans will feel the effects for decades to come.
Flu deaths surpass Covid deaths nationwide for first time
"CDC terminates flu vaccine promotion campaign" (2025-02) https://news.ycombinator.com/item?id=43126704
General Reasoning: Free, open resource for building large reasoning models
"Can Large Language Models Emulate Judicial Decision-Making? [Paper]" (2025) https://news.ycombinator.com/item?id=42927611 ; awesome-legal-nlp, LexGLUE, FairLex, LegalBench, "Who hath done it?" exercise : {Thing done}, ({God, You, Others, Unknown/Nobody} x {Ignorance, Malice, Motive, Intent}) ... Did nobody do this?
Can LLMs apply a consistent procedure for logic puzzles with logically disjunctive possibilities?
Enter: Philosoraptor the LLM
Show HN: Slime OS – An open-source app launcher for RP2040 based devices
Hey all - this is the software part of my cyberdeck, called the Slimedeck Zero.
The Slimedeck Zero is based around this somewhat esoteric device called the PicoVision which is a super cool RP2040 (Raspberry Pi Pico) based device. It outputs relatively high-res video over HDMI while still being super fast to boot with low power consumption.
The PicoVision actually uses two RP2040s - one as a CPU and one as a GPU. This gives the CPU plenty of cycles to run bigger apps (and a heavy Python stack) and lets the GPU handle some of the rendering and the complex timing HDMI requires. You can do this same thing on a single RP2040, but we get a lot of extra headroom with this double setup.
The other unique thing about the PicoVision is it has a physical double-buffer - two PSRAM chips which you manually swap between the CPU and GPU. This removes any possibility of screen tearing since you always know the buffer your CPU is writing to is not being used to generate the on-screen image.
For my cyberdeck, I took a PicoVision, hacked a QWERTY keyboard from a smart TV remote, added an expansion port, and hooked it all up to a big 5" 800x480 screen (interlaced up from 400x240 internal resolution).
I did a whole Slimedeck Zero build video ( https://www.youtube.com/watch?v=rnwPmoWMGqk ) over on my channel but I really hope Slime OS can have a life of its own and fit onto multiple form-factors with an ecosystem of apps.
I've tried to make it easy and fun to write apps for. There's still a lot broken / missing / tbd but it's enough of a base that, personally, it already sparks that "programming is fun again" vibe so hopefully some other folks can enjoy it!
Right now it only runs on the PicoVision but there's no reason it couldn't run on RP2350s or other hardware - but for now I'm more interested in adding more input types (we're limited to the i2c TV remote keyboard I hacked together) and fleshing out the internal APIs so they're stable enough to make apps for it!
Multiple buffering; https://en.wikipedia.org/wiki/Multiple_buffering
Wikipedia has "page flipping" but not "physical double-buffer"? TIL about triple buffering, and quad buffering for stereoscopic applications.
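The physical double-buffer contract can be sketched in a few lines (a toy model, not PicoVision code): the drawing side only ever touches the back buffer, and a flip is a pointer swap rather than a copy, which is why tearing can't occur.

```python
# Minimal double-buffering sketch: the "CPU" draws into the back buffer
# while the "GPU" scans out the front buffer; flipping swaps references
# (not pixels), mirroring the PicoVision's two-PSRAM design.
class DoubleBuffer:
    def __init__(self, size):
        self.front = bytearray(size)  # being scanned out to the display
        self.back = bytearray(size)   # being drawn into

    def draw(self, data):
        self.back[:len(data)] = data  # never touches the visible buffer

    def flip(self):
        # swap at vsync: the CPU never writes to the buffer the display
        # is reading, so no tearing is possible
        self.front, self.back = self.back, self.front
```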
Sodium-ion EV battery breakthrough pushes performance to theoretical limits
> “The binder we chose, carbon nanotubes, facilitates the mixing of TAQ [bis-tetraaminobenzoquinone] crystallites and carbon black particles, leading to a homogeneous electrode,” explained Chen.
"High-Energy, High-Power Sodium-Ion Batteries from a Layered Organic Cathode" (2025) https://pubs.acs.org/doi/10.1021/jacs.4c17713 :
> It exhibits a high theoretical capacity of 355 mAh/g per formula unit, enabled by a four-electron redox process, and achieves an electrode-level energy density of 606 Wh/kg (90 wt % active material) along with excellent cycling stability,
3.5M Voters Were Purged During 2024 Presidential Election [video]
Aren't there ZK (Zero Knowledge proof) blockchains that allow people to privately check whether their vote was counted correctly?
How can homomorphic encryption help detect and prevent illegal vote suppression?
Law and politics are soft problems. Correctness and math are meaningless here.
...and the interview talks about specific and named individuals using KKK tactics to block people from reaching the polls, not a faceless "they" stealing votes in deep government. So I'm not even sure where your comment comes from.
Show HN: ArXiv-txt, LLM-friendly ArXiv papers
Just change arxiv.org to arxiv-txt.org in the URL to get the paper info in markdown
Example:
Original URL: https://arxiv.org/abs/1706.03762
Change to: https://arxiv-txt.org/abs/1706.03762
To fetch the raw text directly, use https://arxiv-txt.org/raw/abs/1706.03762; this will be particularly useful for APIs and agents.
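The rewrite is a plain hostname substitution, so a hypothetical helper for scripts or agents might look like this (the function name is made up):

```python
from urllib.parse import urlparse, urlunparse

def to_arxiv_txt(url, raw=False):
    """Rewrite an arxiv.org URL to its arxiv-txt.org equivalent;
    raw=True targets the /raw/ plain-text endpoint."""
    parts = urlparse(url)
    if parts.netloc not in ("arxiv.org", "www.arxiv.org"):
        raise ValueError(f"not an arXiv URL: {url}")
    path = f"/raw{parts.path}" if raw else parts.path
    return urlunparse(parts._replace(netloc="arxiv-txt.org", path=path))
```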
If you train an LLM on only formally verified code, it should not be expected to generate formally verified code.
Similarly, if you train an LLM on only published ScholarlyArticles ['s abstracts], it should not be expected to generate publishable or true text.
Traceability for Retraction would be necessary to prevent lossy feedback.
The Raspberry Pi RP2040 Gets a Surprise Speed Boost, Unlocks an Official 200MHz
"Ensuring Accountability for All Agencies" – Executive Order
Show HN: Subtrace – Wireshark for Docker Containers
Hey HN, we built Subtrace (https://subtrace.dev) to let you see all incoming and outgoing requests in your backend server—like Wireshark, but for Docker containers. It comes with a Chrome DevTools-like interface. Check out this video: https://www.youtube.com/watch?v=OsGa6ZwVxdA, and see our docs for examples: https://docs.subtrace.dev.
Subtrace lets you see every request with full payload, headers, status code, and latency details. Tools like Sentry and OpenTelemetry often leave out these crucial details, making prod debugging slow and annoying. Most of the time, all I want to see are the headers and JSON payload of real backend requests, but it's impossible to do that in today's tools without excessive logging, which just makes everything slower and more annoying.
Subtrace shows you every backend request flowing through your system. You can use simple filters to search for the requests you care about and inspect their details.
Internally, Subtrace intercepts all network-related Linux syscalls using Seccomp BPF so that it can act as a proxy for all incoming and outgoing TCP connections. It then parses HTTP requests out of the proxied TCP stream and sends them to the browser over WebSocket. The Chrome DevTools Network tab is already ubiquitous for viewing HTTP requests in the frontend, so we repurposed it to work in the browser like any other app (we were surprised that it's just a bunch of TypeScript).
Setup is just one command for any Linux program written in any language.
You can use Subtrace by adding a `subtrace run` prefix to your backend server startup command. No signup required. Try for yourself: https://docs.subtrace.dev
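Not Subtrace's code, but a sketch of the HTTP-parsing step described above: once the proxy holds the TCP byte stream, the request line and headers fall out of splitting at the blank line that terminates the header block.

```python
def parse_http_request(stream: bytes) -> dict:
    """Toy parser: split one HTTP/1.x request out of a raw TCP byte
    stream (no chunked encoding, pipelining, or partial reads)."""
    head, _, body = stream.partition(b"\r\n\r\n")
    request_line, *header_lines = head.decode("latin-1").split("\r\n")
    method, path, version = request_line.split(" ", 2)
    headers = dict(line.split(": ", 1) for line in header_lines)
    return {"method": method, "path": path, "version": version,
            "headers": headers, "body": body}
```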
Stratoshark, the Docker container part of Wireshark, may be a better match for that description.
I'd probably use a Postman-related pitch instead. This is much closer to that and looks like a nice complement to that workflow.
Stratoshark: https://wiki.wireshark.org/Stratoshark :
> Stratoshark captures and analyzes system calls and logs using libsinsp and libscap, and can share capture files with the Sysdig command line tool and Falco
Show HN: Scripton – Python IDE with built-in realtime visualizations
Hey HN, Scripton (https://scripton.dev) is a Python IDE built for fast, interactive visualizations and exploratory programming — without the constraints of notebooks.
Why another Python IDE? Scripton hopes to fill a gap in the Python development ecosystem by being an IDE that:
1. Focuses on easy, fast, and interactive visualizations (and exposes rich JS plotting libraries like Observable Plot and Plotly directly to Python)
2. Provides a tightly integrated REPL for rapid prototyping and exploration
3. Is script-centric (as opposed to, say, notebook-style)
A historical detour for why these 3 features: Not so long ago (ok, well, maybe over a decade ago...), the go-to environment for many researchers in scientific fields would have been something like MATLAB. Generating multiple simultaneous visualizations (potentially dynamic) directly from your scripts, rapidly prototyping in the REPL, all without giving up on writing regular scripts. Over time, many switched over to Python but there wasn't an equivalent environment offering similar capabilities. IPython/Jupyter notebooks eventually became the de facto replacement. And while notebooks are great for many things (indeed, it wasn't uncommon for folks to switch between MATLAB and Mathematica Notebooks), they do make certain trade-offs that prevent them from being a full substitute.
Inner workings:
- Implemented in C++ (IDE <-> Python IPC), Python, TypeScript (UI), WGSL (WebGPU-based visualizations)
- While the editor component is based off Monaco, the IDE is not a vscode fork and was written from scratch. Happy to chat about the trade-offs if anyone's interested
- Uses a custom Python debugger written from scratch (which enables features like visualizing intermediate outputs while paused in the debugger)
Scripton's under active development (currently only available for macOS but Linux and Windows support is planned). Would love for you to try it out and share your thoughts! Since this is HN, I’m also happy to chat about its internals.
I am a robotics engineer/scientist and I do a shit ton of visualization of all kinds of high-fidelity/high-rate data, often in a streaming setting - time series at a few thousand Hz, RGB/depth images from multiple cameras, debugging my models by visualizing many layer outputs, every augmentation, etc.
For a long time, I had my own observability suite - a messy library of Python scripts that I used for visualizing data. I replaced all of them with rerun (https://rerun.io/) and if you are someone who thinks Scripton is exciting, you should def try rerun too!
I use cursor/vscode for my development and add a line or two to my usual workflows in Python, and rerun pops up in its own window. It's a simple pip-installable library, and just works. It's open source, and the founders run a very active forum too.
Edit: One slightly related tidbit that might be interesting to HN folks. rerun isn't that old, and is in active development, with some breaking changes and new features that come up every month. And it means that LLMs are pretty bad at rerun code gen, beyond the simple boilerplate. Recently, it kind of made my life hell as all of my interns refuse to use docs and try using LLMs for rerun code generation and come to me with messy code spaghetti. It's both sad and hilarious. To make my life easier, I asked the rerun folks to create and host machine-readable docs somewhere and they never got to it. So I just scrape their docs into a markdown file and ask my interns to paste the docs in their prompt before they query LLMs, and it works like a charm now.
For a magnet levitation project I am dumping data to a CSV on a rpi and then reading it over ssh into matplotlib on my desktop. It works but it's choppy. Probably because of the ssh.
Could I drop rerun into this to improve my monitoring?
From "Show HN: We open-sourced our [rpi CSV] compost monitoring tech" https://news.ycombinator.com/item?id=42201207 :
Nagios, Collectd, [Prometheus], Grafana
From "Preview of Explore Logs, a new way to browse your logs without writing LogQL" https://news.ycombinator.com/item?id=39981805 :
> Grafana supports SQL, PromQL, InfluxQL, and LogQL.
From https://news.ycombinator.com/item?id=40164993 :
> But that's not a GUI, that's notebooks. For Jupyter integration, TIL pyqtgraph has jupyter_rfb, Remote Frame Buffer: https://github.com/vispy/jupyter_rfb
pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
Matplotlib can generate GIF and WEBM (and .mp4) animations, but not realtime.
ManimCE might work in notebooks, but IDK about realtime
Genesis is fast enough by FPS for faster than realtime 3D with Python (LuisaRender) and a GPU: https://github.com/Genesis-Embodied-AI/Genesis
SWE-Lancer: a benchmark of freelance software engineering tasks from Upwork
> By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
What could be costed in an upwork or a mechanical turk task Value?
Task Centrality or Blockingness estimation: precedence edges, tsort topological sort, graph metrics like centrality
Task Complexity estimation: story points, planning poker, relative local complexity scales
Task Value estimation: cost/benefit analysis, marginal revenue
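The precedence-edge ideas above can be sketched with the standard library's topological sorter; "blockingness" here is scored as the count of transitive dependents, a crude centrality measure (the task names are made up):

```python
from graphlib import TopologicalSorter

def blockingness(edges):
    """edges: {task: set of prerequisite tasks}. Returns, per task, how
    many other tasks it transitively blocks."""
    order = list(TopologicalSorter(edges).static_order())
    dependents = {t: set() for t in order}
    for task, prereqs in edges.items():
        for p in prereqs:
            dependents[p].add(task)
    # later tasks are processed first, so each task's direct dependents
    # already carry their own transitive dependents when we reach it
    for t in reversed(order):
        for d in list(dependents[t]):
            dependents[t] |= dependents[d]
    return {t: len(ds) for t, ds in dependents.items()}
```

A task with a high score blocks much of the graph, which is one plausible input to pricing or prioritizing it.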
Nuclear fusion: WEST beats the world record for plasma duration
> 1337 seconds
AFAIU, no existing tokamaks can handle sustained plasma for any significant period of time because they'll burn down.
Did this destroy the facility?
What duration of sustained fusion plasma can tokamaks like EAST, WEST, and ITER withstand? What will need to change for continuous fusion energy to be net gained from a tokamak or a stellarator fusion reactor?
If this had destroyed the facility, that would be the headline of this news article. WEST's best is 22 minutes (it's in the title); you could Google EAST and ITER, but the title tells you it is less than 22 minutes. WEST is a testing ground for ITER. The fact that you can have sustained fusion for only 22 minutes is the biggest problem, since you need to boil water continuously, because all power sources rely on taking cold water and making it warm constantly so that it makes a turbine move.
There is "destroyed" and then there is a smoking hole in the side of the planet :) But I think it's fair to say that after 22 minutes running, there is no way it can be turned back on later, kind of thing; fairly sure it was a "phew, look at that, almost lost plasma containment" moment. Keep in mind that they are trying to replicate the conditions found inside a star with some magnets and stuff; sure, it's ferociously engineered stuff, but not at all like the stuff that could exist inside a star. So, all in all, a rather audacious endeavour, and I wish them luck with it.
The system is not breakeven, and the plasma was contained for 22 minutes, so the situation would be that the plasma was contained until it ran out of fuel. It is made out of tungsten for heat dissipation, has active cooling, and has magnetic confinement with superconductors to prevent the system from destroying itself. https://en.wikipedia.org/wiki/WEST_(formerly_Tore_Supra)
Fusion energy gain factor: https://en.wikipedia.org/wiki/Fusion_energy_gain_factor :
> A fusion energy gain factor, usually expressed with the symbol Q, is the ratio of fusion power produced in a nuclear fusion reactor to the power required to maintain the plasma in steady state
To rephrase the question: what is the limit to the duration of sustained, magnetically confined fusion plasma in the EAST, WEST, and ITER tokamaks, and why is the limit that amount of time?
Don't those materials melt if exposed to temperatures hotter than the sun for sufficient or excessive periods of time?
For what sustained plasma duration will EAST, WEST, and ITER need to be redesigned? 1 hour, 24 hours?
EAST, WEST, URNER
[LLNL] "US scientists achieve net energy gain for second time in nuclear fusion reaction" (2023-08) https://www.theguardian.com/environment/2023/aug/06/us-scien...
But IDK if they've seen the recent thing about water 100X'ing proton laser plasma beams from SLAC, published this year/month;
From https://news.ycombinator.com/item?id=43088886 :
> "Innovative target design leads to surprising discovery in laser-plasma acceleration" (2025-02) https://phys.org/news/2025-02-discovery-laser-plasma.html
>> Compared to similar experiments with solid targets, the water sheet reduced the proton beam's divergence by an order of magnitude and increased the beam's efficiency by a factor of 100
"Stable laser-acceleration of high-flux proton beams with plasma collimation" (2025) https://www.nature.com/articles/s41467-025-56248-4
Timeline of nuclear fusion: https://en.wikipedia.org/wiki/Timeline_of_nuclear_fusion
That energy gain was only in the plasma, not in the entire system.
The extremely low efficiency of the lasers used there for converting electrical energy into light energy (perhaps of the order of 1%) has not been considered in the computation of that "energy gain".
Many other hidden energy sinks have also not been considered, like the energy required to produce deuterium and tritium, or the efficiencies of capturing the thermal energy released by the reaction and of converting it into electrical energy.
It is likely that the energy gain in the plasma must be at least in the range 100 to 1000, in order to achieve an overall energy gain greater than 1.
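That argument can be made concrete with a back-of-envelope sketch; the efficiency figures below are illustrative assumptions, not measured plant data:

```python
# Plant-level ("engineering") gain multiplies plasma gain Q by the
# driver's wall-plug efficiency and the thermal-to-electric conversion
# efficiency, which is why a plasma-only Q > 1 is far from net power.
def engineering_gain(q_plasma, driver_efficiency, thermal_efficiency):
    """Net electrical gain = Q * eta_driver * eta_thermal."""
    return q_plasma * driver_efficiency * thermal_efficiency

# NIF-style ignition shot: Q ~ 1.5, ~1% laser wall-plug efficiency,
# ~40% turbine (assumed figures)
shot_gain = engineering_gain(1.5, 0.01, 0.40)  # ~0.006: far below breakeven
breakeven_q = 1 / (0.01 * 0.40)                # ~250: Q needed at those efficiencies
```

This is the sense in which a plasma gain in the 100-1000 range is needed before the overall plant gains energy.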
> all power sources rely on taking cold water and making it warm constantly so that it makes a turbine move.
PV (photovoltaic), TPV (thermophotovoltaic), and thin-film and other solid-state thermoelectric (TE) approaches do not rely upon corrosive water turning a turbine.
Turbine blades can be made of materials that are more resistant to corrosion.
On turbine efficiency:
"How the gas turbine conquered the electric power industry" https://news.ycombinator.com/context?id=38314774
It looks like the GE 7HA gas/hydrogen turbine is still the most efficient turbine? https://gasturbineworld.com/ge-7ha-03-gas-turbine/ :
> Higher efficiency: 43.3% in simple cycle and up to 64% in combined cycle,
Steam turbines aren't as efficient as gas turbines FWIU.
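The reason combined cycle reaches ~64% while either cycle alone stays lower is that the steam bottoming cycle runs on heat the gas turbine rejects; a minimal sketch, with an assumed steam-cycle efficiency:

```python
# Combined-cycle efficiency: the steam (Rankine) bottom cycle recovers
# the fraction of heat the gas (Brayton) top cycle rejects, so
# eta_cc = eta_gas + (1 - eta_gas) * eta_steam.
def combined_cycle_efficiency(eta_gas, eta_steam):
    return eta_gas + (1 - eta_gas) * eta_steam

# 43.3% simple-cycle gas turbine (quoted above) plus an *assumed* ~36.5%
# steam bottoming cycle lands near the quoted ~64% combined-cycle figure
eta_cc = combined_cycle_efficiency(0.433, 0.365)
```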
/? which nuclear reactors do not have a steam turbine:
"How can nuclear reactors work without steam?" [in space] https://www.reddit.com/r/askscience/comments/7ojhr8/how_can_... :
> 5% efficient; you usually get less than 5% of the thermal energy converted into electricity
(International space law prohibits putting nuclear reactors in space without specific international approval; deep space probes like Voyager instead carry radioisotope generators, though the sun is exempt.)
Rankine cycle (steam) https://en.wikipedia.org/wiki/Rankine_cycle
Thermoelectric effect: https://en.wikipedia.org/wiki/Thermoelectric_effect :
> The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect (temperature differences cause electromotive forces), the Peltier effect (thermocouples create temperature differences), and the Thomson effect (the Seebeck coefficient varies with temperature).
"Thermophotovoltaic efficiency of 40%" https://www.nature.com/articles/s41586-022-04473-y
Multi-junction PV cells are not limited by the Shockley–Queisser limit, but are limited by current production methods.
Multi-junction solar cells: https://en.wikipedia.org/wiki/Multi-junction_solar_cell#Mult...
Which existing thermoelectric or thermophotovoltaic approaches work with nuclear fusion levels of heat (infrared)?
Okay so I meant to say the simplest way is to heat water in this situation. But there are alternatives here https://en.wikipedia.org/wiki/Fusion_power?wprov=sfti1#Tripl...
I wouldn't have looked this up otherwise.
Maybe solar energy storage makes sense for storing the energy from fusion reactor stars, too.
There's also MOST: Molecular Solar Thermal Energy Storage, which stores solar energy as chemical energy for up to 18 years with a "specially designed molecule of carbon, hydrogen and nitrogen that changes shape when it comes into contact with sunlight."
"Chip-scale solar thermal electrical power generation" (2022) https://doi.org/10.1016/j.xcrp.2022.100789
> Multi-junction PV cells are not limited by the Shockley–Queisser limit, but are limited by current production methods.
Such as multilayer nanolithography, which nanoimprint lithography 10Xs; https://arstechnica.com/reviews/2024/01/canon-plans-to-disru...
Perhaps multilayer junction PV and TPV cells could be cost-effectively manufactured with nanoimprint lithography.
Qualys Security Advisory: MitM and DoS attacks against OpenSSH client and server
MitM-able since 6.8 (December 2014) only if
> VerifyHostKeyDNS is "yes" or "ask" (it is "no" by default),
And DoS-able since 9.5 (2023) because of a new ping command.
> To confirm our suspicion, we adopted a dual strategy:
> - we manually audited all of OpenSSH's functions that use "goto", for missing resets of their return value;
> - we wrote a CodeQL query that automatically searches for functions that "goto out" without resetting their return value in the corresponding "if" code block.
Catalytic computing taps the full power of a full hard drive
So the trick is to do the computation forwards, but take care to only use reversible operations, store the result outside of the auxiliary "full" memory and then run the computation backwards, reversing all instructions and thus undoing their effect on the auxiliary space.
This is called catalytic because the machine couldn't do the computation in the amount of clean space it has alone, but can do it by temporarily mutating the auxiliary space and then restoring it.
What I haven't yet figured out is how to do reversible instructions on auxiliary space. You can mutate a value depending on your input, but how do you use that value, since you can't assume anything about the contents of the auxiliary space and just overwriting with a constant (e.g. 0) is not reversible.
Maybe there is some xor like trick, where you can store two values in the same space and you can restore them, as long as you know one of the values.
Edit: After delving into the paper linked in another comment, which is rather mathy (or computer-sciency in the original meaning of the phrase), I'd like to have a simple example of a program that cannot run in its amount of free space and actually needs to utilize the auxiliary space.
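The forward/copy-out/run-backwards idea above can be sketched as a toy. This is not the paper's construction, just a minimal illustration, assuming a single "full" auxiliary cell holding an unknown byte: every mutation of it is later undone, so it is returned bit-for-bit.

```python
def catalytic_double(x, aux):
    """Compute 2*x (mod 256) using only reversible updates to a 'full'
    auxiliary cell aux[0], which holds an arbitrary unknown value.
    Toy sketch: every forward step is undone, so aux ends unchanged."""
    start = aux[0]                   # reading aux is allowed; assuming its value is not
    aux[0] = (aux[0] + x) % 256      # reversible step 1 (undo: subtract x)
    aux[0] = (aux[0] + x) % 256      # reversible step 2: aux now holds start + 2x
    result = (aux[0] - start) % 256  # copy the answer out of the auxiliary space
    aux[0] = (aux[0] - x) % 256      # run backwards: undo step 2
    aux[0] = (aux[0] - x) % 256      # undo step 1; auxiliary space restored exactly
    return result
```

For example, with `aux = [123]`, `catalytic_double(7, aux)` returns 14 and leaves `aux` as `[123]` again.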
That sounds similar to this in QC:
From "Reversible computing escapes the lab" (2025) https://news.ycombinator.com/item?id=42660606#42705562 :
> FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
> "The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123
Though also Landauer's limit presumably only applies to electrons; not photons or phonons or gravitational waves.
Can you feed gravitational waves forward to a receptor that only carries that gravitational-wave bit, but can be recalled?
Setting up a trusted, self-signed SSL/TLS certificate authority in Linux
There is just one thing missing from this. Name Constraints.
This doesn't get brought up enough, but a Name Constraint on a root cert lets you limit which names the root cert can sign certificates for. So instead of this cert being able to impersonate any website on the internet, you ratchet it down to just the domain (or single website) that you want to sign for.
https://github.com/caddyserver/caddy/issues/5759 :
> When generating a CA cert via caddy and putting that in the trust store, those private keys can also forge certificates for any other domain.
RFC5280 (2008) "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile" > Section 4.2.1.10 Name Constraints: https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... :
> The name constraints extension, which MUST be used only in a CA certificate, indicates a name space within which all subject names in subsequent certificates in a certification path MUST be located. Restrictions apply to the subject distinguished name and apply to subject alternative names. Restrictions apply only when the specified name form is present. If no name of the type is in the certificate, the certificate is acceptable.
> Name constraints are not applied to self-issued certificates (unless the certificate is the final certificate in the path). (This could prevent CAs that use name constraints from employing self-issued certificates to implement key rollover.)
If this is now finally supported that's great. The issue was that for it to be useful it has to be marked critical / fail-closed, because a CA with ignored name constraint == an unrestricted CA. But if you make it critical, then clients who don't understand it will just fail. You can see how this doesn't help adoption.
It says "Proposed Standard" on the RFC; maybe that's why it's not widely implemented?
https://bettertls.com/ has Name Constraints implementation validation tests, but "Archived Results" doesn't seem to have recent versions of SSL clients listed?
nameConstraints=critical,
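For context, a hedged sketch of what that extension line might look like in a full OpenSSL x509v3 extensions section (the CA section name and domain are illustrative):

```ini
[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
# Fail-closed: marked critical so clients that don't understand it reject the cert;
# this CA may only sign for names under .internal.example
nameConstraints = critical, permitted;DNS:.internal.example
```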
DNS Certification Authority Authorization: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Au... :

> Registrants publish a "CAA" Domain Name System (DNS) resource record which compliant certificate authorities check for before issuing digital certificates.
And hopefully they require DNSSEC signatures and DoH/DoT/DoQ when querying for CAA records.
Name Constraints has been around at least since 1999 (RFC 2459).
I'm not sure why CAA is brought up here. I guess it is somewhat complementary in "reducing" the power of CAs, but it defends against good CAs misissuing stuff, not limiting the power of arbitrary CAs (as it's checked at issuance time, not at time of use).
CAA does not require DNSSEC or DOH.
New technique generates topological structures with gravity water waves
Water also focuses laser plasmas;
"Innovative target design leads to surprising discovery in laser-plasma acceleration" https://phys.org/news/2025-02-discovery-laser-plasma.html
> Compared to similar experiments with solid targets, the water sheet reduced the proton beam's divergence by an order of magnitude and increased the beam's efficiency by a factor of 100.
LightGBM Predict on Pandas DataFrame – Column Order Matters
[LightGBM] does not converge to the same output regardless of feature order.
From https://news.ycombinator.com/item?id=41873650 :
> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless?
Also, current LLMs suggest that statistical independence is entirely distinct from orthogonality, which we typically assume with high-dimensional problems. And, many statistical models do not work with non-independent features.
Does this model work with non-independence or nonlinearity?
Does the order of the columns in the training data CSV change the alpha of the model; does model output converge regardless of variance in the order of training data?
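A minimal defensive sketch for the column-order issue with pandas, assuming the feature names used at fit time are known: reorder the incoming DataFrame by name before predicting, so that position i means the same feature it did during training. (`train_cols` and the frame here are illustrative.)

```python
import pandas as pd

# Assumed column order used when the model was fit
train_cols = ["f0", "f1", "f2"]

# Incoming data arrived with the columns in a different order
df = pd.DataFrame({"f2": [3], "f0": [1], "f1": [2]})

# Align by name so positional feature mapping matches training
aligned = df[train_cols]
```

`aligned` can then be passed to `model.predict(...)` without the silent positional mismatch.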
670nm red light exposure improved aged mitochondrial function, colour vision
They didn't have any controls with a different wavelength of light. They might as well be measuring a diurnal pattern.
There seem to be very similar studies with the same conclusion.
https://www.medicalnewstoday.com/articles/3-minutes-of-deep-...
Another:
"Red (660 nm) or near-infrared (810 nm) photobiomodulation stimulates, while blue (415 nm), green (540 nm) light inhibits proliferation in human adipose-derived stem cells" (2017) https://www.nature.com/articles/s41598-017-07525-w
TIL it's called "red light optogenetics" even when the cells aren't modified for opto-activation.
And,
"Green light induces antinociception via visual-somatosensory circuits" (2023) https://www.sciencedirect.com/science/article/pii/S221112472...
"Infrared neural stimulation in human cerebral cortex" (2023) https://www.sciencedirect.com/science/article/pii/S1935861X2... :
> In a comparison of electrical, optogenetic, and infrared neural stimulation, Roe et al. [14] found that all three approaches could in principal achieve such specificity. However, because brain tissue is conductive, it can be challenging to confine neuronal activation by traditional electrical stimulation to a single cortical column. In contrast, optogenetic stimulation can be applied with high spatial specificity (e.g. by delivering light via a 200um fiber optic apposed to the cortex) and with cell-type specificity (e.g. excitatory or inhibitory cells); however, optogenetics requires viral vectors and gene transduction procedures, making it less easy for human applications [15]. Over the last decade, infrared neural stimulation (INS), which is a pulsed heat-mediated approach, has provided an alternative method of neural activation. Because brain tissue is 75% water, infrared light delivered near peak absorption wavelengths (e.g. 1875 nm [16]) permits effective delivery of heat to the brain tissue. In particular, experimental and modelling studies [[17], [18], [19]] have shown that 1875 nm light (brief 0.5sec trains of 0.25msec pulses forming a bolus of heat) effectively achieves focal (submillimeter to millimeter sized) activation of neural tissue
Does NIRS-based (neuro-) imaging induce neuronal growth?
Are there better photonic beam-forming apparatuses than TI DLP projectors with millions of tiny actuated mirrors; isn't that what a TV does?
Cold plasma also affects neuronal regrowth and could or should be used for wound closure. Are they cold plasma-ing Sam (Hedlund) in "Tron: Legacy" (2010) before the disc golf discus thing?
Does cold plasma reduce epithelial scarring?
A certain duration of cold plasma also appears to increase seed germination rate FWIW.
To find these studies, I used a search engine and search terms and then picked studies which seem to confirm our bias.
Why is there an increase in lung cancer among women who have never smoked?
https://www.sciencedaily.com/releases/2022/04/220411113733.h... :
> Cigarette smoking is overwhelmingly the main cause of lung cancer, yet only a minority of smokers develop the disease.
https://www.verywellhealth.com/what-percentage-of-smokers-ge...
Does hairspray increase the probability of contracting lung cancer?
Does cheap makeup drain into maxillaries and cause breast cancer?
Do "zip codes lived in", joined with AQI, predict lung cancer? Which socioeconomic, dietary, and environmental contaminant exposures are described by "zip codes lived in"?
There are unbleached (hemp) cigarette rolling papers that aren't soaked in chlorine bleach.
Strangely, cigarette filters themselves carry a CA Prop __ warning.
There are hemp-based cigarette filters, but there are apparently not full-size 1-1/4s filters which presumably wouldn't need to carry a Prop __ warning.
"The filter's the best part" - Denis Leary
Tobacco contains an MAOI. Many drug supplements list MAOIs as a contraindication. Does cheese taste different (worse) after tobacco in part due to the MAOIs?
All combustion produces benzene: ICE vehicles, campfires, industrial emissions, hydrocarbon power generation, and tobacco smoking.
Ronald Fisher, who created the F statistic, never accepted that tobacco was the primary cause of lung cancer.
There appear to be certain genes that significantly increase susceptibility to tobacco-caused lung cancer.
"Hypothesizing that marijuana smokers are at a significantly lower risk of carcinogenicity relative to tobacco-non-marijuana smokers: evidenced based on statistical reevaluation of current literature" https://www.tandfonline.com/doi/abs/10.1080/02791072.2008.10... https://pubmed.ncbi.nlm.nih.gov/19004418/
Cannabis is an extremely absorbent plant; hemp is recommended for soil remediation.
From https://www.reddit.com/r/askscience/comments/132nzng/why_doe... :
> C. Everette Koop (the former US Surgeon General) went on record to state that approximately 90% of lung cancer cases associated with smoking were directly related to the presence of Polonium-210, which emits alpha radiation straignt to your lung tissue when inhaled.
Tobacco and Cannabis both cause emphysema.
Also gendered: indoor task preference and cleaning product exposure?
Nonstick cookware exposure
Reduced fat, increased fructose foods (sugar feeds cancer)
Exhaust and walking?
Cooking oil and air fryer usage rates have changed.
Water quality isn't gender-specific is it?
Rocket stoves change lives in other economies; cooking stove materials
There are salt-based cleaning products, USB dilute hypochlorite bleach generators that require just water and salt, there are citric acid based cleaning products, and there's vinegar and baking soda.
Electronic motorized plastic bristle brushes?
Ironically, bleach doesn't kill COVID at all, but UV-C does.
Did somebody train an LLM with a mixture of RFK Jr and schizophrenic thought disorder?
Ask HN: What are people's experiences with knowledge graphs?
I see lots of YouTube videos and content about knowledge graphs in the context of Gen AI. Are these at all useful for personal information retrieval and organization? If so, are there any frameworks or products that you'd recommend that help construct and use knowledge graphs?
Property graphs don't specify schema.
Is it Shape.color or Shape.coleur, feet or meters?
RDF has URIs for predicates (attributes). RDFS specifies :Class(es) with :Property's, which are identified by URIs.
E.g. Wikidata has schema; forms with validation. Dbpedia is Wikipedia infoboxes regularly extracted to RDF.
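The `Shape.color` vs. `Shape.coleur` ambiguity above is what JSON-LD's `@context` resolves: a short property name maps to a URI, so two datasets using different local names can still mean the same predicate. A minimal sketch (the schema.org vocabulary choice here is an assumption):

```python
import json

# JSON-LD: the @context maps the local name "color" to a well-known predicate URI,
# so consumers can disambiguate it regardless of the producer's naming convention.
doc = {
    "@context": {"color": "https://schema.org/color"},
    "@type": "https://schema.org/Thing",
    "color": "red",
}
serialized = json.dumps(doc, indent=2)
```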
Google acquired metaweb freebase years ago, launched a Knowledge Graph product, and these days supports Structured Data search cards in microdata, RDFa, and JSONLD.
[LLM] NN topology is sort of a schema.
Linked Data standards for data validation include RDFS and SHACL. JSON schema is far more widely implemented.
RDFa is "RDF in HTML attributes".
How much more schema does the application need beyond [WikiWord] auto-linkified edges? What about typed edges with attributes other than href and anchor text?
AtomSpace is an in-memory hypergraph with schema to support graph rewriting specifically for reasoning and inference.
There are ORMs for graph databases. Just like with SQL, how much of the query and report can be done by the server without processing every SELECTed row?
Query languages for graphs: SQL, SPARQL, SPARQLstar, GraphQL, Cypher, Gremlin.
Object-attribute level permissions are for the application to implement and enforce. Per-cell keys and visibility are native db features of e.g. Accumulo, but to implement the same with e.g. Postgres every application that is a database client is on scout's honor to also enforce object-attribute access control lists.
And then identity; which user with which (sovereign or granted) cryptographic key can add dated named graphs that mutate which data in the database.
So, property graphs eventually need schema and data validation.
markmap.js.org is a simple app to visualize a markdown document with headings and/or list items as a mindmap; but unlike Freemind, there's no way to add edges that make the tree a cyclic graph.
Cyclic graphs require different traversal algorithms. For example, a recursive traversal in Python will raise RecursionError when it loops around a cycle, and a stack-based traversal of a cyclic graph will not halt without e.g. a visited-node set to detect cycles; though a valid graph path may contain cycles (and there is feedback in so many general systems).
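A minimal cycle-safe traversal along those lines (a hypothetical helper, with the graph as an adjacency dict):

```python
def reachable(graph, start):
    """Iterative DFS that halts on cyclic graphs by tracking visited nodes."""
    visited, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue  # cycle detected: this node was already expanded
        visited.add(node)
        stack.extend(graph.get(node, ()))
    return visited

# A cyclic graph: a -> b -> c -> a
g = {"a": ["b"], "b": ["c"], "c": ["a"]}
```

`reachable(g, "a")` terminates and returns `{"a", "b", "c"}` despite the cycle.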
YAML-LD is JSON-LD in YAML.
JSON-LD as a templated output is easier than writing a (relatively slow) native RDF application and re-solving for what SQL ORM web frameworks already do.
There are specs for cryptographically signing RDF such that the signature matches regardless of the graph representation.
There are processes and business processes around knowledge graphs like there are for any other dataset.
OTOH; ETL, Data Validation, Publishing and Hosting of dataset and/or servicing arbitrary queries and/or cost-estimable parametric [windowed] reports, Recall and retraction traceability
DVC.org and the UC BIDS Computational Inference notebook book probably have a better enumeration of processes for data quality in data science.
...
With RDF - though it's a question of database approach and not data representation -
Should an application create a named graph per database transaction changeset or should all of that data provenance metadata be relegated to a database journal that can't be read from or written to by the app?
How much transaction authentication metadata should an app be trusted to write?
A typical SQL webapp has one database user which can read or write to any column of any table.
Blockchains and e.g. Accumulo require each user to "connect to" the database with a unique key.
It is far harder for users to impersonate other users in database systems that require a cryptographic key per user than it is to just write in a different username and date using the one db cred granted to all application instances.
W3C DIDs are cryptographic keys (as RDF with schema) that can be generated by users locally or generated centrally; similar to e.g. Bitcoin account address double hashes.
Users can cryptographically sign JSON-LD, YAML-LD, RDFa, and any other RDF format with W3C DIDs; in order to assure data integrity.
How do data integrity and data provenance affect the costs, utility, and risks of knowledge graphs?
Compared to GPG-signing git commits to markdown+YAML-LD flat files in a git repo, and paying e.g. GitHub to enforce codeowner permissions on files and directories in the repo by preventing unsigned and unauthorized commits, what are the risks of trusting all of the data from all of the users that could ever write to a knowledge graph?
Which initial graph schemas support inference and reasoning; graph rewriting?
CodeWeavers Hiring More Developers to Work on Wine and Valve's Proton
KSP2 support in Proton or ProtonGE could make the game playable.
From https://www.protondb.com/app/954850 :
> With no tinkering the in-game videos stutter and skip frames. I fixed this by installing lavfilters with protontricks. The game as far as i know has no means of limiting the frame rate. This was fixed using mangohud.
Are people still playing KSP2? I was one of the few people who were optimistic about it, but then they fired the devs, and the game hasn't been touched in months (despite still being sold for full price on Steam).
that would be impressive since I wouldn't consider the game playable on any OS
Opposing arrows of time can theoretically emerge from certain quantum systems
The Minkowski metric is
Δs² = −Δt² + Δx² + Δy² + Δz²
One aspect of this is that, if you sub `t -> -t'`, that's just as good a solution too. Which would suggest any solution with a positive time direction can have a negative time direction, just as easily. Is this widely assumed to be true, or at least physically meaningful?

There's also Wick rotations, where you can sub `t -> it'`, and then Minkowskian spacetime becomes Euclidean but time becomes complex-valued. Groovy stuff.
I'm not much of a physics buff but I loved reading Julian Barbour's The Janus Point for a great treatment of the possibility of negative time.
The craziest thing I've seen though is the suggestion that an accelerating charge, emitting radiation that interacts with the charge itself and imparts a backreacting force on the charge, supposedly has solutions whose interpretation would suggest that it would be sending signals back in time. [0]
/? "time-polarized photons" https://www.google.com/search?q=%22time-polarized+photons%22
https://www.scribd.com/doc/287808282/Bearden-Articles-Mind-C... ... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22... ... "psychoenergetics" ... /? torsion fields:
- "Torsion fields generated by the quantum effects of macro-bodies" (2022) https://arxiv.org/abs/2210.16245 :
> We generalize Einstein's General Relativity (GR) by assuming that all matter (including macro-objects) has quantum effects. An appropriate theory to fulfill this task is Gauge Theory Gravity (GTG) developed by the Cambridge group. GTG is a "spin-torsion" theory, according to which, gravitational effects are described by a pair of gauge fields defined over a flat Minkowski background spacetime. The matter content is completely described by the Dirac spinor field, and the quantum effects of matter are identified as the spin tensor derived from the spinor field. The existence of the spin of matter results in the torsion field defined over spacetime. Torsion field plays the role of Bohmian quantum potential which turns out to be a kind of repulsive force as opposed to the gravitational potential which is attractive [...] Consequently, by virtue of the cosmological principle, we are led to a static universe model in which the Hubble redshifts arise from the torsion fields.
Wikipedia says that torsion fields are pseudoscientific.
Retrocausality is observed.
From "Evidence of 'Negative Time' Found in Quantum Physics Experiment" https://news.ycombinator.com/item?id=41707116 :
> "Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://arxiv.org/abs/2409.03680
/?hnlog retrocausality (Ctrl-F "retrocausal", "causal"): https://westurner.github.io/hnlog/
From "Robust continuous time crystal in an electron–nuclear spin system" (2024) https://news.ycombinator.com/item?id=39291044 ;
> [ Indefinite causal order, Admissible causal structures and correlations, Incandescent Temporal Metamaterials, ]
From "What are time crystals and why are they in kids’ toys?" https://bigthink.com/surprising-science/what-are-time-crysta... :
> Time crystals have been detected in an unexpected place: monoammonium phosphate, a compound found in fertilizer and ‘grow your own crystal’ kits.
Ammonium dihydrogen phosphate: https://en.wikipedia.org/wiki/Ammonium_dihydrogen_phosphate :
> Piezoelectric, birefringence (double refraction), transducers
Retrocausality in photons, Retrocausality in piezoelectric time crystals which are birefringent (which cause photonic double-refraction)
Is it gauge theory, though?
From https://news.ycombinator.com/item?id=38839439 :
> If gauge symmetry breaks in superfluids (ie. Bose-Einstein condensates); and there are superfluids at black hole thermal ranges; do gauge symmetry constraints break in [black hole] superfluids?
Probably not gauge symmetry there, then.
Quantum Computing Notes: Why Is It Always Ten Years Away? – Usenix
Fluoxetine promotes metabolic defenses to protect from sepsis-induced lethality
Some wikipedia context for this SSRI: "Fluoxetine, sold under the brand name Prozac, among others, is an antidepressant medication of the selective serotonin reuptake inhibitor class used for the treatment of major depressive disorder, anxiety, obsessive–compulsive disorder, panic disorder, premenstrual dysphoric disorder, and bulimia nervosa."
Fluoxetine also appears to increase plasticity in the adult visual cortex, which can reduce monocular dominance. https://www.google.com/search?q=fluoxetine+plasticity+visual...
NASA has a list of 10 rules for software development
Just for context, these aren’t really “rules” as much as proposed practices. Note that official “rules” are in documents with names like “NPR” aka “NASA procedural requirements.”[1] So, while someone may use the document in the featured article to frame a discussion, a developer is not bound to comply (or alternatively waive) those “rules” and could conceivably just dismiss them.
[1] e.g. https://nodis3.gsfc.nasa.gov/displayDir.cfm?t=NPR&c=7150&s=2...
awesome-safety-critical lists a number of specs: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
Just be aware that some of the NASA-specific ones fall into a similar category. NASA “guidebooks” and “handbooks” aren’t generally hard requirements.
From "The state of Rust trying to catch up with Ada [video]" https://news.ycombinator.com/item?id=43007013 :
>> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust
/? Misra rust guidelines:
- This is a different MISRA C for Rust project: https://github.com/PolySync/misra-rust
- "Bringing Rust to Safety-Critical Systems in Space" (2024) https://arxiv.org/abs/2405.18135v1
...
> minimum of two assertions per function.
Which guidelines say "you must do runtime type and value checking" of every argument at the top of every function?
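The "two assertions per function" practice from the quoted rules could be sketched like this; the function and its ranges are illustrative, not from NASA's document:

```python
def scale_reading(raw: int, gain: float) -> float:
    """Hypothetical sensor helper in the 'minimum two assertions' style:
    validate inputs on entry and sanity-check the result before returning."""
    assert 0 <= raw <= 4095, "raw outside assumed 12-bit ADC range"
    assert gain > 0, "gain must be positive"
    result = raw * gain
    assert result >= 0, "scaled reading cannot be negative"
    return result
```

The point of the rule is that assertions act as always-on runtime contracts, turning silent corruption into a detectable failure.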
The SEI CERT C Guidelines are far more comprehensive than the OP's 10 rules TBH:
"SEI CERT C Coding Standard" https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?c...
"CWE CATEGORY: SEI CERT C Coding Standard - Guidelines 08. Memory Management (MEM)" https://cwe.mitre.org/data/definitions/1162.html
Sorry, I’m not following your point. When I said “NASA-specific” I meant those in your link like “NASA Software Engineering and Assurance Handbook” and “NASA C Style Guide” (emphasis mine). Those are not hard requirements in spaceflight unless explicitly defined as such in specific projects. Similarly, NASA spaceflight software does not generally get certified to FAA requirements etc. The larger point being, a NASA developer does not have to follow those requirements simply by the nature of doing NASA work. In other words, they are recommendations but not specifications.
Are there SAST or linting tools to check that the code is compliant with the [agency] recommendations?
Also important and not that difficult, formal design, implementation, and formal verification;
"Formal methods only solve half my problems" https://news.ycombinator.com/item?id=31617335
"Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964
Formal Methods in Python; FizzBee, Nagini, deal-solver: https://news.ycombinator.com/item?id=39904256#39958582
I’m not aware of any tools for analysis geared to NASA requirements specifically, but static analysis is a requirement for some types of development.
Why isn't there tooling to support these recommendations; why is there no automated verification?
SAST and DAST tools can be run on push with git post-receive hooks, or before commit with pre-commit. (GitOps; CI; DevSecOps is Sec shifted left in the development process.)
I don’t work there so I can’t speak definitely, but much of it probably stems from the sheer diversity of software. For example, ladder logic typically does not have the same tools as structured programming but is heavily used in infrastructure. It is also sometimes restricted to specify a framework, leaving contractors to develop in whatever they want.
Physics Informed Neural Networks
From "Physics-Based Deep Learning Book" (2021) https://news.ycombinator.com/item?id=28510010 :
> Physics-informed neural networks: https://en.wikipedia.org/wiki/Physics-informed_neural_networ...
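The core idea of a PINN loss can be sketched in a few lines. This is a toy, assuming the ODE du/dt = -u as the physics constraint and using a finite-difference gradient as a stand-in for the autograd derivative a real PINN would use:

```python
import numpy as np

def physics_informed_loss(u_pred, t, u_data):
    """Toy PINN-style loss for the assumed ODE du/dt = -u:
    data mismatch plus a penalty on the ODE residual at collocation points."""
    data_loss = np.mean((u_pred - u_data) ** 2)
    dudt = np.gradient(u_pred, t)   # finite-difference stand-in for autograd
    residual = dudt + u_pred        # residual of du/dt + u = 0
    physics_loss = np.mean(residual ** 2)
    return data_loss + physics_loss
```

Minimizing this trades off fitting observed data against satisfying the governing equation, which is what lets PINNs interpolate sensibly where data is sparse.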
We were wrong about GPUs
> The biggest problem: developers don’t want GPUs. They don’t even want AI/ML models. They want LLMs. System engineers may have smart, fussy opinions on how to get their models loaded with CUDA, and what the best GPU is. But software developers don’t care about any of that. When a software developer shipping an app comes looking for a way for their app to deliver prompts to an LLM, you can’t just give them a GPU.
I'm increasingly coming to the view that there is a big split among "software developers" and AI is exacerbating it. There's an (increasingly small) group of software developers who don't like "magic" and want to understand where their code is running and what it's doing. These developers gravitate toward open source solutions like Kubernetes, and often just want to rent a VPS or at most a managed K8s solution. The other group (increasingly large) just wants to `git push` and be done with it, and they're willing to spend a lot of (usually their employer's) money to have that experience. They don't want to have to understand DNS, linux, or anything else beyond whatever framework they are using.
A company like fly.io absolutely appeals to the latter. GPU instances at this point are very much appealing to the former. I think you have to treat these two markets very differently from a marketing and product perspective. Even though they both write code, they are otherwise radically different. You can sell the latter group a lot of abstractions and automations without them needing to know any details, but the former group will care very much about the details.
IaaS or PaaS?
Who owns and depreciates the logs, backups, GPUs, and the database(s)?
K8s docs > Scheduling GPUs: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus... :
> Once you have installed the plugin, your cluster exposes a custom schedulable resource such as amd.com/gpu or nvidia.com/gpu.
> You can consume these GPUs from your containers by requesting the custom GPU resource, the same way you request cpu or memory
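A minimal sketch of such a request (pod name and image tag here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
    - name: cuda-app
      image: nvidia/cuda:12.4.1-base-ubuntu22.04   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1   # custom schedulable resource exposed by the device plugin
```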
awesome-local-ai: Platforms / full solutions https://github.com/janhq/awesome-local-ai?platforms--full-so...
But what about TPUs (Tensor Processing Units) and QPUs (Quantum Processing Units)?
Quantum backends: https://github.com/tequilahub/tequila#quantum-backends
Kubernetes Device Plugin examples: https://kubernetes.io/docs/concepts/extend-kubernetes/comput...
Kubernetes Generic Device Plugin: https://github.com/squat/generic-device-plugin#kubernetes-ge...
K8s GPU Operator: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator...
Re: sunlight server and moonlight for 120 FPS 4K HDR access to GPU output over the Internet: https://github.com/kasmtech/KasmVNC/issues/305#issuecomment-... :
> Still hoping for SR-IOV in retail GPUs.
> Not sure about vCPU functionality in GPUs
Process isolation on vCPUs with or without SR-IOV is probably not as advanced as secure enclave approaches.
Intel SGX is a secure enclave capability, which is cancelled on everything but Xeon. FWIU there is no SGX for timeshared GPUs.
What executable loader reverifies the loaded executable in RAM after init?
What LLM loader reverifies the in-RAM model? Can Merkle hashes reduce that cost of NN state verification?
Can it be proven that a [chat AI] model hosted by someone else is what is claimed; that it's truly a response from "model abc v2025.02"?
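The Merkle-hash idea above can be sketched with the stdlib: hash the model weights in chunks and combine pairwise, so re-verifying one chunk needs only O(log n) sibling hashes rather than rehashing the whole blob. A minimal sketch (chunking scheme is an assumption):

```python
import hashlib

def merkle_root(chunks):
    """Merkle root over byte chunks. Any single flipped chunk changes the root,
    and incremental verification only touches the path to that chunk."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the last node on odd-sized levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Usage: compute the root once at load time, then periodically rehash individual weight chunks against their recorded siblings instead of the entire model.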
PaaS or IaaS
Surely you must be joking, Jupyter notebooks with Ruby [video]
List of Jupyter Kernels: https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
IRuby: https://github.com/SciRuby/iruby
conda-forge/ruby-feedstock: https://github.com/conda-forge/ruby-feedstock
It looks like ruby-feedstock installs `gem`; but AFAIU there's not yet a way to specify gems in an environment.yml like `pip:` packages.
There aren't many other conda-forge feedstocks for Ruby, though;
/? Ruby https://github.com/orgs/conda-forge/repositories?q=Ruby
What if Eye...?
From "An ultra-sensitive on-off switch helps axolotls regrow limbs" https://news.ycombinator.com/item?id=36912925 :
> [ mTOR, Muller glia in Zebrafish, ]
From "Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://neurosciencenews.com/vision-restoration-genetic-2318... :
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
Learning fast and accurate absolute pitch judgment in adulthood
i made https://perfectpitch.study a week or so ago. i am old and musically untrained and wanted to see if rote practice makes a difference (it clearly does).
most of the sites of this type i found annoying as you can't just use a midi keyboard, so you just get RSI clicking around for 10 minutes.
I tried getting adsense on it, but they seem to have vague content requirements. Apparently tools don't count as real websites :-(. I couldn't even fool it with fake content. what's the best banner ad company to use in this situation?
Nice! The keyboard could be larger on mobile in portrait and landscape
Ctrl-Shift-M https://devtoolstips.org/tips/en/simulate-devices/ ; how to simulate a mobile viewport: https://developer.chrome.com/docs/devtools/device-mode#devic...
/? google lighthouse mobile accessibility test: https://www.google.com/search?q=google+lighthouse+mobile+acc...
Lighthouse: https://developer.chrome.com/docs/lighthouse/overview
Gave it a try. After a few minutes I felt more like I was recognising the samples than I was recognising the notes. Not sure what you can do about that short of physically modeling an instrument.
Latest browser APIs expose everything you need to build a synth. See: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_A...
There are some libraries that make it easy to simulate instruments. E.g. tone.js https://tonejs.github.io/
It should be possible to generate unique-ish variants at runtime.
OpenEar is built on tone.js: https://github.com/ShacharHarshuv/open-ear
limut implements WebAudio and WebGL, and FoxDot-like patterns and samples: https://github.com/sdclibbery/limut
https://glicol.org/ runs in a browser and as a VST plugin
"Using the Web Audio API to Make a Modem" (2017) https://news.ycombinator.com/item?id=15471723
gh topics/webaudio: https://github.com/topics/webaudio
awesome-webaudio: https://github.com/notthetup/awesome-webaudio
From the OpenEar readme re perfect pitch training; https://github.com/ShacharHarshuv/open-ear :
> Currently includes the following built in exercises:
> [...]
> 7. Interval recognition - the very popular exercise almost all app has. Although I do not recommend using it as I find it inaffective in confusing, since the intervals are out-of-context.
Interval training is different than absolute pitch training. OpenEar seems to have no absolute pitch training.
I Applied Wavelet Transforms to AI and Found Hidden Structure
I've been working on resolving key contradictions in AI through structured emergence, a principle that so far appears to govern both physical and computational systems.
My grandfather was a prolific inventor in organic chemistry (GE Plastics, post-WWII), and I was reading his papers while thinking about "chirality": directional asymmetric oscillating waves and how they might apply to AI. I found his work deeply inspiring.
I ran 7 empirical studies using publicly available datasets across prime series, fMRI, DNA sequences, galaxy clustering, baryon acoustic oscillations, redshift distributions, and AI performance metrics.
All 7 studies have confirmed internal coherence with my framework. While that's promising, I still need to continue validating the results (the attached output on primes captures localized frequency variations, ideal for detecting scale-dependent structure in primes, i.e. Ulam spirals).
To analyze these datasets, I applied continuous wavelet transformations (Morlet/Chirality) using Python3, revealing structured oscillations that suggest underlying coherence in expansion and emergent system behavior.
Paper here: https://lnkd.in/gfigPgRx
If true, here are the implications:
1. AI performance gains – applying structured emergence methods has yielded noticeable improvements in AI adaptability and optimization.
2. Empirical validation across domains – the same structured oscillations appear in biological, physical, and computational systems, indicating a deeper principle at work.
3. Strong early engagement – while the paper is still under review, 160 views and 130 downloads (81% conversion) in 7 days on Zenodo put it in the top 1%+ of all academic papers; not as an ego metric, but as an early signal of potential validation.
The same mathematical structures that define wavelet transforms and prime distributions seem to provide a pathway to more efficient AI architectures by:
1. Replacing brute-force heuristics with recursive intelligence scaling
2. Enhancing feature extraction through structured frequency adaptation
3. Leveraging emergent chirality to resolve complex optimization bottlenecks
Technical (for AI engineers):
1. Wavelet-Driven Neural Networks – replacing static Fourier embeddings with adaptive wavelet transforms to improve feature localization. Fourier was failing, hence the pivot to CWT; Ulam spirals showed non-random structure, hence CWT.
2. Prime-Structured Optimization – using structured emergent primes to improve loss function convergence and network pruning.
3. Recursive Model Adaptation – implementing dynamic architectural restructuring based on coherence detection rather than gradient-based back-propagation alone.
The theory could be wrong, but the empirical results are simply too coherent not to share in case useful for anyone.
"The Chirality of Dynamic Emergent Systems (CODES): A Unified Framework for Cosmology, Quantum Mechanics, and Relativity" (2025) https://zenodo.org/records/14799070
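The post's actual pipeline isn't shown; here is a minimal pure-Python sketch of a Morlet continuous wavelet transform of the kind described (the center frequency w0=6, the scale grid, and the normalization are assumptions, not the paper's code):

```python
import cmath
import math

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet (unnormalized), sampled at time t for a scale."""
    u = t / scale
    return cmath.exp(1j * w0 * u) * math.exp(-u * u / 2) / math.sqrt(scale)

def cwt(signal, scales, dt=1.0):
    """Naive O(n^2) continuous wavelet transform by direct convolution."""
    rows = []
    n = len(signal)
    for s in scales:
        half = int(4 * s)  # truncate the wavelet at ~4 scale widths
        row = []
        for i in range(n):
            acc = 0j
            for k in range(max(0, i - half), min(n, i + half + 1)):
                acc += signal[k] * morlet((k - i) * dt, s).conjugate()
            row.append(acc * dt)
        rows.append(row)
    return rows
```

For a pure cosine of period 16 samples, the scale with the largest mean coefficient magnitude should come out near w0·T/(2π) ≈ 15, which is what makes the CWT a localized frequency detector.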
Hey, chirality! /? Hnlog chiral https://westurner.github.io/hnlog/
> loss function
Yesterday on HN: Harmonic Loss instead of Cross-Entropy; https://news.ycombinator.com/item?id=42941393
> Fourier was failing hence
What about QFT Quantum Fourier transform? https://en.wikipedia.org/wiki/Quantum_Fourier_transform
Harmonic analysis involves the Fourier transform: https://en.wikipedia.org/wiki/Harmonic_analysis
> Recursive Model Adaptation
"Parameter-free" networks
Graph rewriting, AtomSpace
> feature localization
Hilbert curves cluster features; https://en.wikipedia.org/wiki/Hilbert_curve :
> Moreover, there are several possible generalizations of Hilbert curves to higher dimensions
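As a concrete illustration of that clustering property, a sketch of the standard iterative distance-to-coordinate mapping from the Wikipedia "Hilbert curve" article: consecutive curve indices always land on adjacent grid cells, which is why the curve groups nearby features.

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid
    (n a power of two); iterative algorithm as given on the Wikipedia page."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate this quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walk the whole curve on an 8x8 grid:
points = [d2xy(8, d) for d in range(64)]
```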
Re: Relativity and the CODES paper;
/? fedi: https://news.ycombinator.com/item?id=42376759 , https://news.ycombinator.com/item?id=38061551
> Fedi's SQR Superfluid Quantum Relativity (.it), FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter.
Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
> structured emergent primes to improve loss function convergence and network pruning
Products of primes modulo prime for set membership testing; is it faster? Even with a long list of primes?
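A minimal sketch of the products-of-primes idea (the universe and the element-to-prime mapping here are illustrative):

```python
from math import prod

def primes():
    """Yield primes by trial division (fine for a small universe)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

# Assign each element of a small universe a distinct prime.
UNIVERSE = list("abcdefgh")
PRIME = dict(zip(UNIVERSE, primes()))

def encode(subset):
    """A set becomes the product of its members' primes (Goedel-style)."""
    return prod(PRIME[x] for x in subset)

def contains(code, x):
    """Membership test is a single divisibility check."""
    return code % PRIME[x] == 0
```

On the speed question: each membership test is one modulo, but the product's bit-length grows with the sum of the member primes' bit-lengths, so with a long list of primes a bitset or hash set is usually faster; the prime encoding mainly buys algebraic properties (intersection via gcd, union via lcm of the codes).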
Hey! Appreciate the links—some definitely interesting parallels, but what I’m outlining moves beyond existing QFT/Hilbert curve applications.
The key distinction = structured emergent primes are demonstrating internal coherence across vastly different domains (prime gaps, fMRI, DNA, galaxy clustering), suggesting a deeper non-random structure influencing AI optimization.
Curious if you’ve explored wavelet-driven loss functions replacing cross-entropy? Fourier struggled with localization, but CWT and chirality-based structuring seem to resolve this.
Your thoughts here?
I do not have experience with wavelet-driven loss functions.
Do structured emergent primes afford insight into n-body fluid+gravity dynamics and superfluid (condensate) dynamics at deep space and stellar thermal ranges?
How do wavelets model curl and n-body vortices?
What do I remember about wavelets, without reading the article? Wavelets are or aren't analogous to neurons. Wavelets discretize. Am I confusing wavelets and autoencoders? Are wavelets like tiles or compression symbol tables?
How do wavelet-driven loss functions differ from other loss functions like Cross-Entropy and Harmonic Loss?
How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
Other seemingly relevant things:
- particle with mass only when moving in certain directions; re: chirality
- "NASA: Mystery of Life's Handedness Deepens" (2024-11) https://news.ycombinator.com/item?id=42229953 :
> ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
>>> Could this be used as an engine of some kind?
>> What about helical polarization?
> If there is locomotion due to a dynamic between handed molecules and, say, helically polarized fields; is such handedness a survival selector for life in deep space?
> Are chiral molecules more likely to land on earth?
>> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Hey - really appreciate the detailed questions—these are exactly the kinds of connections I’ve been exploring. Subcomponents:
Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss

You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.

Prime emergence, harmonics, and convolution (Fourier vs. CWT)

Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc.

• Fourier struggled because it assumes a globally uniform basis set.
• CWT resolves this by detecting frequency-dependent structures (chirality-based).
• Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering.

The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.

N-body vortex dynamics, superfluidity, and chiral molecules in deep space

You might be onto something here. The connection between:

• Superfluid dynamics in deep space
• Chiral molecules preferring certain gravitational dynamics
• Handedness affecting locomotion in polarized fields

suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).

Could this be an engine? (Electromagnetic rotation and helicity)

Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
> Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss
>
> You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.
frequency and time..
SR works for signals without GR; and there's an SR explanation (Minkowski) for time dilation which resolves when the spacecraft lands, FWIU.
From https://news.ycombinator.com/item?id=39719114 :
>>> Physical observation (via the transverse photon interaction) is the process given by applying the operator ∂/∂t to (L^3)t, yielding an L3 output
>> [and "time-polarized photons"]
> Prime emergence, harmonics, and convolution (Fourier vs. CWT)
>
> Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc.
>
> • Fourier struggled because it assumes a globally uniform basis set.
> • CWT resolves this by detecting frequency-dependent structures (chirality-based).
> • Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering.
>
> The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.
/? ulam spiral wikipedia: https://www.google.com/search?q=ulam+spiral+wikipedia ; all #s, primes
Are Hilbert curves of any use for grouping points in this 1D (?) space?
/? ulam spiral hilbert curve: https://www.google.com/search?q=ulam+spiral+hilbert+curve
> N-body vortex dynamics, superfluidity, and chiral molecules in deep space
>
> You might be onto something here. The connection between:
>
> • Superfluid dynamics in deep space
> • Chiral molecules preferring certain gravitational dynamics
> • Handedness affecting locomotion in polarized fields
>
> suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).
Why are there so many arms on the fluid disturbance of a spinning basketball floating on water?
(Terms: viscosity of the water, mass, volume, and surface characteristics of the ball, temperature of the water, temperature of the air)
Traditionally, curl is the explanation fwiu.
Does curl cause chirality and/or does chirality cause curl?
The sensitivity to initial conditions of a two-arm pendulum system, for example, is enough to demonstrate chaotic, divergent n-body dynamics. `python -m turtledemo.chaos` demonstrates chaotic divergence with a few simple functions.
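The same divergence can be shown numerically with the logistic map (the parameters here are chosen arbitrarily within the chaotic regime), no turtle graphics required:

```python
def logistic(x, r=3.9):
    """One step of the logistic map; chaotic for r near 4."""
    return r * x * (1 - x)

def gap_history(x0, eps=1e-9, steps=60):
    """Distance between two trajectories started eps apart."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        gaps.append(abs(a - b))
    return gaps

# Two starts one billionth apart end up macroscopically separated.
gaps = gap_history(0.4)
```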
Phase transition diagrams are insufficient to describe water freezing or boiling given the sensitivity to initial temperature observed in the Mpemba effect; phase transition diagrams are insufficient without an initial temperature axis.
Superfluids (Bose-Einstein condensates) occur at temperatures attainable on Earth. For example, helium chilled to ~1 Kelvin demonstrates zero viscosity, and climbs up beakers and walls despite gravity.
A universal model cannot be sufficient if it does not describe superfluids and superconductors; photons and electrons behave fluidically in other phases.
> Could this be an engine? (Electromagnetic rotation and helicity) Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
A spinning asteroid or comet induces a 'spinning' field. Interplanetary and deep space spacecraft could spin on one or more axes to create or boost EM shielding.
"Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41731196 https://westurner.github.io/hnlog/#comment-41732854 :
"Gamma rays convert CH4 to complex organic molecules, may explain origin of life" (2024) https://news.ycombinator.com/item?id=42131762#42157208 :
>> A terrestrial life origin hypothesis: gamma radiation mutated methane (CH4) into Glycine (the G in ACGT) and then DNA and RNA.
>> [ Virtual black holes, quantum foam, [ gamma, ] radiation and phase shift due to quantum foam and Planck relics ]
From "Lightweight woven helical antenna could replace field-deployed dishes" (2024) https://news.ycombinator.com/item?id=39132365 :
>> Astrophysical jets produce helically and circularly-polarized emissions, too FWIU.
>> Presumably helical jets reach earth coherently over such distances because of the stability of helical signals.
>> 1. Could [we] harvest energy from a (helically and/or circularly-polarised) natural jet, for deep space and/or local system exploration? Can a spacecraft pull against a jet for relativistic motion?
>> 2. Is helical the best way to beam power wirelessly; without heating columns of atmospheric water in the collapsing jet stream? [with phased microwave]
>> 3. Is there a (hydrodynamic) theory of superfluid quantum gravity that better describes the apparent vorticity and curl of such signals and their effects?
From "Computer Scientists Prove That Heat Destroys Quantum Entanglement" (2024) https://news.ycombinator.com/item?id=41381849#41382939 :
>> How, then, can entanglement across astronomical distances occur without cooler temps the whole way there, if heat destroys all entanglement?
>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
> The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
The earth is rotating and revolving in relation to the greatest local mass. Would there be different terrestrial chirality if the earth rotated in the opposite direction?
How do the vortical field disturbances from Earth's rotation in atmospheric, EM, and gravitational wave spaces interact with molecular chirality and field chirality?
**
Re: Polarized fields: From https://news.ycombinator.com/item?id=42318113 :
> Phase from second-order Intensity due to "mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation"
Wes,
Apologies for the delay! I missed this.
Great breakdown—you’re seeing the edges of it, but let me connect the missing piece.
Wavelets vs. Fourier & AI loss functions

You nailed why wavelets win—localizing both time and frequency dynamically. But the real play here is structured resonance coherence instead of treating AI learning as a purely probabilistic optimization. Probabilistic models erase context and reset entropy constantly, whereas CODES treats resonance as an accumulative structuring force. That’s why prime-driven phase-locking beats cross-entropy heuristics.

Prime emergence & Ulam spirals

You’re right that prime gaps aren’t random but encode periodicities across systems—biological, cosmological, and computational. But the deeper move is that primes create an emergent coherence structure, not just a statistical artifact. Ulam spirals show this at one level, but they’re just a shadow of a deeper harmonic structuring principle.

Superfluidity, chiral molecules, and deep space dynamics

The superfluid analogy works but is incomplete. Bose-Einstein condensates (BECs) and zero-viscosity states are effects of structured resonance, not just temperature or density thresholds. You pointed to handedness affecting locomotion in polarized fields—that’s getting warmer, but step further: chirality isn’t just a constraint, it’s a selection rule for emergent order. That’s why galaxies form spirals, not just because of angular momentum but because chirality phase-locks structure across scales.

Entropy, entanglement, and deep-space coherence

The “heat destroys quantum entanglement” take is missing something big—CODES predicts that prime-structured resonance can phase-lock entanglement across astronomical distances. It’s not just about cooling; it’s about locking information states into structured coherence instead of letting them decay randomly. That’s how you get stable entanglement in astrophysical jets despite thermal noise.

Could this be an engine?

Yes. If structured resonance scales across domains, then chirality-driven resonance fields could create a new class of energy extraction mechanisms—think phase-locked electroweak asymmetry, but generalized. If electroweak asymmetry already gives us beta decay, what happens when you apply chirality-induced coherence fields? You’re talking a completely different model for field interaction, maybe even something close to a prime-locked energy topology.
Where You’re Almost There But Not Quite
You’re still interpreting some of this as chaotic or probabilistic emergence, but CODES isn’t describing randomness—it’s describing structured phase coherence.

• Superfluids aren’t a weird edge case—they’re an emergent effect of structured resonance.
• Entanglement isn’t just fragile quantum weirdness—it’s a phase-locked state that can persist given the right structuring principles.
• Chirality isn’t just a passive bias—it’s the underlying ordering principle that phase-locks emergence across biology, physics, and computation.
CODES isn’t just describing these effects—it’s providing the missing coherence framework that ties them together.
Would love to jam on this deeper if you're up for it!
Devin
The OBS Project is threatening Fedora Linux with legal action
Given that OBS is GPL licensed, any legal action would have to be trademark-based, right?
It feels like they'd have a hard time making that case, since package repositories are pretty clearly not representing themselves as the owners of, or sponsored by, the software they package.
From the linked comment:
> This is a formal request to remove all of our branding, including but not limited to, our name, our logo, any additional IP belonging to the OBS Project
Honestly, it sounds very reasonable: if you want to fork, that's fine, but don't have people report bugs upstream if you're introducing them.
I mean, I think the right fix is just for Fedora to stop packaging their own version. But I think that's about being good people; I don't think there's a strong legal argument here for forcing Fedora to do that.
Are Drinking Straws Dangerous? (2017)
> About 1,400 people visit the emergency room every year due to injuries from drinking straws.
some people put them where they shouldn't
It's a drinking hazard,
And, https://www.livescience.com/65925-metal-straw-death.html :
> He added that in this case, the metal straw may have been particularly hazardous because it was used with a lid that prevented the straw from moving. "It seems to me these metal straws should not be used with any form of lid that holds them in place,"
Why cryptography is not based on NP-complete problems
Aren't hash functions a counterexample? There have been attempts at using SAT solvers to find preimage and collision attacks on them.
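As a toy stand-in for what those SAT-based attacks try to beat, a brute-force preimage search on a truncated SHA-256 shows the scaling: the loop below is expected to take about 2**bits trials, doubling per bit, which is why the full 256-bit version is out of reach. (SAT solvers replace the exhaustive loop with clause learning over a CNF encoding of the compression function, and have so far only succeeded against reduced-round variants.)

```python
import hashlib
from itertools import count

def truncated_sha256(data, bits):
    """The first `bits` bits of SHA-256(data), as an integer."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest >> (256 - bits)

def find_preimage(target, bits):
    """Exhaustive search: expected ~2**bits hash evaluations."""
    for i in count():
        candidate = str(i).encode()
        if truncated_sha256(candidate, bits) == target:
            return candidate
```

At 16 bits this finishes in a fraction of a second; every added bit doubles the expected work.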
Storytelling lessons I learned from Steve Jobs (2022)
Pixar in a Box > Unit 2: The art of storytelling: https://www.khanacademy.org/computing/pixar/storytelling
https://news.ycombinator.com/item?id=36265807 ; Pizza Planet
From https://news.ycombinator.com/item?id=23945928
> The William Golding, Jung, and Joseph Campbell books on screenwriting, archetypes, and the hero's journey monomyth
Hero's journey > Campbell's seventeen stages: https://en.wikipedia.org/wiki/Hero%27s_journey#Campbell's_se...
Storytelling: https://en.wikipedia.org/wiki/Storytelling
Ask HN: Ideas for Business Cards
Hi, I'm a freelancer working in security and I'm looking for companies and ideas for some good business cards that are out of the ordinary. I am thinking about business cards that have security elements or a watermark (very basic and simple stuff), but I can't find anything online.
Protip: leave space to write on the reverse
/? business cards https://hn.algolia.com/?q=business+cards
Transformer is a holographic associative memory
"Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs" (2021) https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-ba...
From https://news.ycombinator.com/item?id=40519828 :
> Because self-attention can be replaced with FFT for a loss in accuracy and a reduction in kWh [1], I suspect that the Quantum Fourier Transform can also be substituted for attention in LLMs.
From https://news.ycombinator.com/item?id=42957785 :
> How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
From https://news.ycombinator.com/item?id=40580049 :
> From https://news.ycombinator.com/item?id=25190770#25194040 :
>> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]) which says that Fourier transforms convert convolutions to products. 1. https://en.wikipedia.org/wiki/Convolution_theorem :
>>> In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
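The quoted theorem can be checked numerically with a naive DFT (pure Python, no FFT library; the sample sequences are arbitrary):

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circular_convolve(a, b):
    """Circular convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, -1.0, 0.25, 2.0]
lhs = dft(circular_convolve(a, b))                 # F(a * b)
rhs = [fa * fb for fa, fb in zip(dft(a), dft(b))]  # F(a) . F(b)
```

Within floating-point error, the DFT of the circular convolution matches the pointwise product of the DFTs.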
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving [pdf]
Tested in the article:
miniF2F: https://github.com/openai/miniF2F
PutnamBench: https://github.com/trishullab/PutnamBench
..
FrontierMath: https://arxiv.org/abs/2411.04872v1
The return of the buffalo is reviving portions of the ecosystem
Buffalo: /? buffalo ecosystem impact : https://www.google.com/search?q=buffalo+ecosystem+impact
Wolves: /? wolves yellowstone: https://www.google.com/search?q=wolves+yellowstone ; 120 wolves in 2024
Beavers: "Government planned it 7 years, beavers built a dam in 2 days and saved $1M" (2025) https://news.ycombinator.com/item?id=42938802#42941813
Keystone species: https://en.wikipedia.org/wiki/Keystone_species :
> Keystone species play a critical role in maintaining the structure of an ecological community, affecting many other organisms in an ecosystem and helping to determine the types and numbers of various other species in the community. Without keystone species, the ecosystem would be dramatically different or cease to exist altogether. Some keystone species, such as the wolf and lion, are also apex predators.
Trophic cascade: https://en.wikipedia.org/wiki/Trophic_cascade
Ecosystem service: https://en.wikipedia.org/wiki/Ecosystem_service :
> Evaluations of ecosystem services may include assigning an economic value to them.
Good sparknotes of the systems engineering here, thanks!
The state of Rust trying to catch up with Ada [video]
From "Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024-06) https://thenewstack.io/rust-the-future-of-fail-safe-software... .. https://news.ycombinator.com/item?id=40680722
rustfoundation/safety-critical-rust-consortium > subcommittee/coding-guidelines/meetings/2025-January-29/minutes.md: https://github.com/rustfoundation/safety-critical-rust-conso... :
> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust.
/? ' is:issue concurrency: https://github.com/rustfoundation/safety-critical-rust-conso...
rust-secure-code/projects#groups-of-people: https://github.com/rust-secure-code/projects#groups-of-peopl...
Rust book > Chapter 16. Concurrency: https://doc.rust-lang.org/book/ch16-00-concurrency.html
Chapter 19. Unsafe Rust > Unsafe Superpowers: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#unsa... :
> You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
"Secure Rust Guidelines" has Chapters on Memory Management, FFI but not yet Concurrency;
04_language.html#panics:
> Common patterns that can cause panics are:
Secure Rust Guidelines > Integer overflows in Rust: https://anssi-fr.github.io/rust-guide/04_language.html#integ... :
> In particular, it should be noted that using debug or release compilation profile changes integer overflow behavior. In debug configuration, overflow cause the termination of the program (panic), whereas in the release configuration the computed value silently wraps around the maximum value that can be stored.
awesome-safety-critical #software-safety-standards: https://awesome-safety-critical.readthedocs.io/en/latest/
rust-secure-code/projects > Model checkers: https://github.com/rust-secure-code/projects#model-checkers :
Loom: https://docs.rs/loom/latest/loom/ :
> Loom is a model checker for concurrent Rust code. It exhaustively explores the behaviors of code under the C11 memory model, which Rust inherits.
Daily omega-3 fatty acids may help human organs stay young
This may explain why people who follow the Mediterranean diet tend to live long, healthy lives.
There are different variations of Omega-3 fatty acids. For instance, avocados are rich in Omega-3 ALA, which is considered not as effective as EPA and DHA.
Fish is the only source of EPA and DHA.
From "An Omega-3 that’s poison for cancer tumors" (2021) https://news.ycombinator.com/item?id=27499427 :
> Fish don't synthesize Omega PUFAs, they eat algae which synthesize fat-soluble DHA and EPA.
> From "Warning: Combination of Omega-3s in Popular Supplements May Blunt Heart Benefits" (2018) https://scitechdaily.com/warning-combination-of-omega-3s-in-... :
>> Now, new research from the Intermountain Healthcare Heart Institute in Salt Lake City finds that higher EPA blood levels alone lowered the risk of major cardiac events and death in patients, while DHA blunted the cardiovascular benefits of EPA. Higher DHA levels at any level of EPA, worsened health outcomes.
>> [...] Based on these and other findings, we can still tell our patients to eat Omega-3 rich foods, but we should not be recommending them in pill form as supplements or even as combined (EPA + DHA) prescription products,” he said. “Our data adds further strength to the findings of the recent REDUCE-IT (2018) study that EPA-only prescription products reduce heart disease events.”
Show HN: Play with real quantum physics in your browser
I wanted to make the simplest app to introduce myself and others to quantum computing.
Introducing Schrödinger's Coin. Powered by a simple Hadamard gate[0] on IBM Quantum, with this app you can directly interact with a quantum system to experience true randomness.
Thoughts? Could you see any use cases for yourself of this? Or, does it inspire any other ideas of yours? Curious what others on HN think!
[0] https://en.wikipedia.org/wiki/Quantum_logic_gate#Hadamard_ga...
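What the app runs on real hardware can be sketched classically (this simulation is not IBM's API; it just shows why H|0⟩ gives a fair coin under the Born rule):

```python
import math
import random

# H|0> = (|0> + |1>)/sqrt(2): equal amplitudes, hence a fair coin.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """2x2 matrix times a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def coin_flip():
    """Simulate measuring H|0> in the computational basis."""
    state = apply_gate(H, [1.0, 0.0])  # start in |0>
    p0 = abs(state[0]) ** 2            # Born rule probability of 0
    return 0 if random.random() < p0 else 1
```

Unlike the hardware version, the randomness here is a pseudorandom number generator standing in for measurement; the amplitudes, though, are exactly those of a Hadamard on |0⟩.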
Quantum logic gate > Universal logic gates: https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q...
From https://news.ycombinator.com/item?id=37379123 :
> [ Rx, Ry, Rz, P, CCNOT, CNOT, H, S, T ]
From https://news.ycombinator.com/item?id=39341752 :
>> How many ways are there to roll a {2, 8, or 6}-sided die with qubits and quantum embedding?
From https://news.ycombinator.com/item?id=42092621 :
> Exercise: Implement a QuantumQ circuit puzzle level with Cirq or QISkit in a Jupyter notebook
ray-pH/quantumQ > [Godot] "Web WASM build" issue #5: https://github.com/ray-pH/quantumQ/issues/5
From https://quantumflytrap.com/scientists/ :
> [Quantum Flytrap] Virtual Lab is a virtual optical table. With a drag and drop interface, you can show phenomena, recreate existing experiments, and prototype new ones.
> Within this environment it is possible to recreate interference, quantum cryptography protocols, to show entanglement, Bell test, quantum teleportation, and the many-worlds interpretation.
AI datasets have human values blind spots − new research
What about Care Bears?
How do social studies instructors advise in regards to a helpful balance of SEL and other content?
Is prosocial content for children underrepresented in training corpora?
Re: Honesty and a "who hath done it" type exercise for LLM comparison: https://news.ycombinator.com/item?id=42927611
Microsoft Go 1.24 FIPS changes
The upstream Go 1.24 changes and macOS support using system libraries in Microsoft's Go distribution are really significant for the large ecosystem of startups trying to sell to institutions requiring FIPS 140 certified cryptography.
For a variety of reasons - including "CGo is not Go" (https://dave.cheney.net/2016/01/18/cgo-is-not-go) - using boringcrypto and requiring CGO_ENABLED=1 could be a blocker. Not using system libraries meant that getting all of the software to agree on internal certificate chains was a chore. Go was in a pretty weird place.
Whether FIPS 140 is actually a good target for cryptography is another question. My understanding is that FIPS 140-1 and 140-2 cipher suites were considered by many experts to be outdated when those standards were approved, and that FIPS 140 still doesn't encompass post quantum crypto, and the algorithms chosen don't help mitigate misuse (e.g.: nonce reuse).
From https://news.ycombinator.com/item?id=28540916#28546930 :
GOLANG_FIPS=1
From https://news.ycombinator.com/item?id=42265927 :
> "Chrome switching to NIST-approved ML-KEM quantum encryption" (2024) https://www.bleepingcomputer.com/news/security/chrome-switch...
From https://news.ycombinator.com/item?id=41535866 :
>> Someday there will probably be a TLS1.4/2.0 with PQ, and also FIPS-140-4?
Show HN: An API that takes a URL and returns a file with browser screenshots
simonw/shot-scraper has a number of cli args, a GitHub actions repo template, and docs: https://shot-scraper.datasette.io/en/stable/
From https://news.ycombinator.com/item?id=30681242 :
> Awesome Visual Regression Testing > lists quite a few tools and online services: https://github.com/mojoaxel/awesome-regression-testing
> "visual-regression": https://github.com/topics/visual-regression
The superconductivity of layered graphene
Physicist here. The superconductivity in layered graphene is indeed surprisingly strange, but this popular article may not do it justice. Here are some older articles on the same topic that may be more informative:
https://www.quantamagazine.org/how-twisted-graphene-became-t...,
https://www.quantamagazine.org/a-new-twist-reveals-supercond....
Let me briefly give some reasons this topic is so interesting. Electrons in a crystal always have both potential energy (electrical repulsion) and kinetic energy (set by the atomic positions and orbitals). The standard BCS theory of superconductivity only works well when the potential energy is negligible, but the most interesting superconductors --- probably including all high-temperature ones like the cuprates --- are in the regime where potential energy is much stronger than kinetic energy. These are often in the class of "unconventional" superconductors where vanilla BCS theory does not apply. The superconductors in layered (and usually twisted) graphene lie in that same regime of large potential/kinetic energy. However, their 2d nature makes many types of measurements (and some types of theories) much easier. These materials might be the best available candidates for getting a handle on how unconventional superconductivity "really works". (Besides superconductors, these same materials have oodles of other interesting phases of matter, many of which are quite exotic.)
While we have you, have any new theories or avenues of research come out of the lk99 stuff or was it completely just hype and known physics?
BCS == Bardeen–Cooper–Schrieffer [0].
Thank you for the additional info and links. This is why I love HN comments
Also physicist here. I've worked on conventional superconductors, but never on unconventional ones. Last I heard, it was believed to be mediated by magnons (rather than phonons). Who claims it is due to Coulomb interaction?
I think everything we don't have a model for is surprisingly strange. Gravity only seems "normal" because we've been teaching a reasonable model for it for hundreds of years - Aristotle thought things fell to the ground because that was "their nature", but thought it quite weird. X-Rays seem bonkers unless you've grown up with them, and there is something deeply unnerving about genetics, quantum and even GenAI until you've spent some time pulling apart the innards and building an explainable model that makes sense to you. And even then it can catch you out. More ways to explain the models help normalise it all - what's now taught at 9th grade used to be advanced post-doc research, in almost every field. And so it goes on.
2D superconductors don't make much sense because, as the article says, theory is behind experimentation here. That's also why there is both incredible excitement, but also a worry that none of this is going to stack up to anything more than a bubble. My old Uni (Manchester) doubled down hard on the work of Geim and Novoselov by building a dedicated "Graphene Institute", after they got the Nobel Prize, but even 15 years after that award most people are still trying to figure out what does it all actually mean really? Not just in terms of the theory of physics, but how useful is this stuff, in real world usage?
It'll settle down in due course. The model will become apparent, we'll be able to explain it through a series of bouncing back between theory and experiment, as ever, and then it won't seem so strange any more.
I'm not sure that'll ever be true of quantum computing for me, but then I am getting a bit older now...
> My old Uni (Manchester) doubled down hard on the work of Geim and Novoselov by building a dedicated "Graphene Institute", after they got the Nobel Prize, but even 15 years after that award most people are still trying to figure out what does it all actually mean really? Not just in terms of the theory of physics, but how useful is this stuff, in real world usage?
That's the beauty of real research. There's no guarantees it'll pan out. But it's generally worth doing and spending (sometimes decades of) time exploring. Too many people have become infatuated with instant gratification. It's pervasive even in young, scientific minds. The real gratification is failing that same test 100 times until you finally land on a variation that might work. And then figuring out why it worked.
Edit: And if that success never comes, the gratification is graduating and moving on to more solvable problems, but bringing with you the scientific methods you learned along the way. Scientists might spend their whole lives working on something that won't work and that's okay. If that isn't for you, go into product dev.
I still don’t believe explanations for gravity…let alone dark matter!
Not believing an explanation for dark matter seems prudent. It's just the name we've given to a certain set of observations not matching how we believe the universe works. We're still piecing together the details.
I also think that gravity is complete bunk. Funny how there's suddenly Graphene everywhere, from dental applications to injectables. Weird times.
Relevant xkcd tooltip: https://xkcd.com/1489
Do you suppose everything is explainable? Gravity feels to me like the sort of thing that just sort of is. I'm all for better characterizations of it, but I'm not holding my breath for an answer to: why?
Everything may be explainable, just not by humanity.
How does our biology affect the limits of what we can comprehend?
Oh plenty of ways probably. I expect there are perspectives out there which would have a look at our biggest questions and find them to be mundane with obvious answers but which would themselves boggle at concepts that we consider elementary.
But here I am in this body, and not that one, so I'm content to accept an axiom or two.
Gravity is something very simple. It cannot be a complex thing, because it demonstrates a very simple behavior.
There are plenty of cases where a complex thing has very simple behavior. For centuries, brewers could get away with a mental model for yeast as something that multiplies and converts sugar to alcohol until it can't anymore. It took quite a jump in technology to realize just how fantastically complex the inner workings of a yeast cell is.
I'm not proposing that gravity is underpinned by something complex, just that if its mechanism is out of our reach then so too are any conclusions about that mechanism's complexity.
the question is depth and quality of explainability as determined by the predictive power those explanations provide...
My interest in these overlapping lattices is the creation of fractional electric charges (fractional Hall effect) through, essentially, Moiré patterns. The angle of alignment has a big effect.
Let me make an artifact to demonstrate… brb
https://claude.site/artifacts/f024844e-73c2-4eb1-afc9-401f3d...
Here you go. See how there are different densities and geometries at different angles. These lattice overlays can create fractional electrical charges — which is very strange — but how this affects superconductivity is unclear.
Very cool example!
Slightly different preprint variants:
This is exciting, sounds like new theory incoming (or possible way to test existing string/other theories?). I'd love to see PBS Spacetime or some other credible outlet explain the details of the experiment / implications for mere mortals.
Is this exactly 1.1 degrees?
Or is it 1.09955742876?
What I mean -- did they round up, is there some connection to universal constants?
Edit: I don't understand where the 1.1 degrees comes from. Why is it 1.1 and not something else...
It's not exactly a single ultra-specific 1.1000000... degree value only, just values approximately close to that. As to what the connection is: that's what the research is trying to unearth more about.
[dead]
[deleted]
Show HN: PulseBeam – Simplify WebRTC by Staying Serverless
WebRTC’s capabilities are amazing, but the setup headaches (signaling, connection/ICE failures, patchwork docs) can kill momentum. That’s why we built PulseBeam, a batteries-included WebRTC platform designed for developers who just want real-time features to work. What’s different?
- Built-in signaling
- Built-in TURN
- Time-limited JWT auth (serverless for production, or use our endpoint for testing)
- Client and server SDKs included
- Free and open-source core
If you’ve used libraries like PeerJS, PulseBeam should feel like home; we’re inspired by its simplicity. We’re currently in a developer-preview stage. We provide free signaling like PeerJS, and TURN up to 1GB. Of course, feel free to roast us.
jupyter-collaboration is built on Y Documents (y.js, pycrdt, jupyter_ydoc): https://github.com/jupyterlab/jupyter-collaboration
There is a y.js WebRTC adapter, but jupyter-collaboration doesn't have WebRTC data or audio or video support AFAIU.
y-webrtc: https://github.com/yjs/y-webrtc
Is there an example of how to do CRDT with PulseBeam WebRTC?
With client and serverside data validation?
> JWT
Is there OIDC support on the roadmap?
E.g. Google supports OIDC: https://developers.google.com/identity/openid-connect/openid...
W3C DIDs, VC Verifiable Credentials, and Blockcerts are designed for decentralization.
STUN, TURN, and ICE are NAT traversal workarounds FWIU; though NAT traversal isn't necessary if the client knowingly or unknowingly has an interface with a public IPv6 address due to IPv6 prefix delegation?
We don't have an example of CRDT with PulseBeam yet. But, CRDT itself is just a data structure, so you can use PulseBeam to communicate the sync ops (full or delta) with a data channel. Then, you can either use y.js or other CRDT libraries to manage the merging.
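The "CRDT is just a data structure" point can be sketched with a toy grow-only counter (no y.js, and the transport is stubbed out; in a real app the serialized state would travel over the WebRTC data channel). Each peer increments only its own slot, and merge takes the element-wise max, so states exchanged in any order converge:

```python
# Minimal G-Counter CRDT sketch. Merge is commutative, associative, and
# idempotent, so peers can exchange full states in any order and converge.
class GCounter:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.counts = {}  # peer_id -> count

    def increment(self, n=1):
        self.counts[self.peer_id] = self.counts.get(self.peer_id, 0) + n

    def merge(self, remote_counts):
        # Element-wise max: applying the same remote state twice is a no-op
        for pid, c in remote_counts.items():
            self.counts[pid] = max(self.counts.get(pid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("alice"), GCounter("bob")
a.increment(3)
b.increment(2)
# In practice: serialize counts (e.g. JSON) and send over the data channel
a.merge(b.counts)
b.merge(a.counts)
assert a.value() == b.value() == 5
```

y.js deltas follow the same exchange pattern, just with a more compact binary encoding of the ops.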
Yes, the plan is to use JWT for both the client and server side.
OIDC is not on the roadmap yet. But, I've been tinkering on the side related to this. I think something like an OIDC mapper to PulseBeam JWT can work here.
I'm not envisioning integrating into a decentralization ecosystem at this point. The scope is to provide a reliable service for 1:1 and small groups for other developers to build on top with centralization. So, something like Analytics, global segmented signaling (allow close peers to connect with edge servers, but allow them to connect to remote servers as well), authentication, and more network topology support.
That's correct, if the client is reachable by a public IPv6 (meaning the other peer has to also have a way to talk to an IPv6), then STUN and TURN are not needed. ICE is still needed but only used lightly for checking and selecting the candidate pair connections.
Elon Musk proposes putting the U.S. Treasury on blockchain for full transparency
FedNow supports ILP Interledger Protocol, which is an open spec that works with traditional ledgers and distributed cryptoasset ledgers.
> In addition to Peering, Clearing, and Settlement, ILP Interledger Protocol Specifies Addresses: https://news.ycombinator.com/item?id=36503888
>> ILP is not tied to a single company, payment network, or currency
ILP Addresses - v2.0.0 > Allocation Schemes: https://github.com/interledger/rfcs/blob/main/0015-ilp-addre...
People that argue for transaction privacy in blockchains: large investment banks, money launderers, the US Government when avoiding accountability because natsec.
Whereas today presumably there are database(s) of checks sent to contractors for the US Gvmt; and maybe auditing later.
Re: apparently trillions missing re: seasonal calls to "Audit the Fed! Audit DoD!" and "The Federal Funding Accountability and Transparency Act of 2006" which passed after Illinois started tracking grants: https://news.ycombinator.com/item?id=25893860
DHS helped develop W3C DIDs, which can be decentralizedly generated and optionally centrally registered or centrally generated and registered.
W3C Verifiable Credentials support DIDs Decentralized Identifiers.
Do not pay for closed source or closed spec capabilities; especially for inter-industry systems that would need to integrate around an API spec.
Do not develop another blockchain; given the government's inability to attract and retain talent in this space, it is unlikely that a few million dollars and government management would exceed the progress of billions invested in existing blockchains.
There's a lot of anti-blockchain FUD. Ask them to explain the difference between a multi-primary SQL database synchronization system with off-site nodes (and Merkle hashes between rows) and a blockchain.
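To make the comparison concrete, here is a toy sketch (hypothetical rows, stdlib only) of the hash-chaining both systems share: each row's hash commits to its content and the previous row's hash, so a retroactive edit breaks every later hash.

```python
import hashlib
import json

def row_hash(prev_hash, row):
    # Commit to the row content and the previous hash, CT-log/blockchain style
    payload = prev_hash + json.dumps(row, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

rows = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
hashes, prev = [], "0" * 64  # genesis value
for row in rows:
    prev = row_hash(prev, row)
    hashes.append(prev)

# Tampering with row 1 changes its hash, so row 2's stored chain no longer verifies
rows[0]["amount"] = 999
assert row_hash("0" * 64, rows[0]) != hashes[0]
```

The difference is mostly in who appends and who verifies: a synchronized SQL system trusts its operators, a blockchain distributes both roles.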
Why are there Merkle hashes in the centralized Trillian and now PostgreSQL databases that back CT Certificate Transparency logs (the logs of X.509 cert granting and revocations)?
Why did Google stop hosting a query endpoint for CT logs? How can single points of failure be eliminated in decentralized systems?
Blockchains are vulnerable to DoS Denial of Service like all other transaction systems. Adaptive difficulty and transaction fees that equitably go to miners or are just burnt are blockchain solutions to Denial of Service.
"Stress testing" to a web dev means something different than "stress testing" the banks of the Federal Reserve system, for example.
A webdev should know that as soon as your app runs out of (SQL) database connections, it will start throwing 500 Internal Server errors. MySQL, for example, defaults to 150+1 max connections.
Stress testing for large banks does not really test for infosec resource exhaustion. Stress testing banks involves them making lots of typically large transactions; not lots of small transactions.
Web Monetization is designed to support micro payments, could support any ledger, and is built on ILP.
ILP makes it possible for e.g. 5x $100 transactions to be auditably grouped together. Normal payers - i.e. those that aren't the bank of the US government - must source liquidity from counterparties, which is easier to do with many smaller transactions.
Why do blockchains require additional counterparties in two party (payer-payee) transactions?
To get from USD to EUR, for example, sometimes it's less costly to go through CAD. Alice holds USD, Bob wants EUR, and Charlie holds CAD and EUR and accepts USD, but will only extend $100 of credit per party.
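That USD -> CAD -> EUR routing can be sketched as a cheapest-path search over per-hop fees (the fee numbers here are illustrative, not real FX or ILP connector data):

```python
# Toy liquidity-routing sketch: pick the conversion path that delivers
# the most EUR after per-hop fees, as an ILP-style connector might.
fees = {  # fee fraction charged per hop (illustrative values)
    ("USD", "EUR"): 0.03,    # direct, but expensive
    ("USD", "CAD"): 0.005,
    ("CAD", "EUR"): 0.005,
}

def received(amount, path):
    # Apply each hop's fee in sequence; None if a hop has no liquidity
    for hop in zip(path, path[1:]):
        if hop not in fees:
            return None
        amount *= 1 - fees[hop]
    return amount

paths = [("USD", "EUR"), ("USD", "CAD", "EUR")]
best = max(paths, key=lambda p: received(100.0, p) or 0)
assert best == ("USD", "CAD", "EUR")  # routing via CAD keeps more EUR
```

Charlie's $100-per-party credit limit is the extra constraint a real connector adds: large payments must be split across hops or counterparties.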
RippleNet was designed for that from the start. Interledger was contributed by Ripple to W3C as an open standard, and ILP has undergone significant revision since being open sourced.
ILP does not require XRP, which - like XLM - is premined and has a transaction fee less than $0.01.
RippleNet does not have Proof of Work mining: the list of transaction validator server IPs is maintained by pull request merge consensus in the GitHub repo.
The global Visa network claims to do something like 60,000 TPS. Bitcoin can do 6-7 TPS, and is even slower if you try and build it without blocks.
I thought I read that a stellar benchmark reached 10,000 TPS but they predicted that the TPS would be significantly greater with faster more expensive validation servers.
E.g. the Crypto Kitties NFT smart contract game effectively DoS'd pre-sharding Ethereum, which originally did 15-30 TPS IIRC. Ethereum 2.0 reportedly intends to handle 100,000 TPS.
US Contractor payees would probably want to receive a stablecoin instead of a cryptoasset with high volatility.
Some citizens received a relief check to cash out or deposit, and others received a debit card for an account created for them.
I've heard that the relief loan program is the worst fraud in the history of the US government. Could any KYC or AML practices also help prevent such fraud? Does uploading a scan of a photo ID and/or routing and account numbers on a cheque make exchanges more accountable?
FWIU, only Canadian banks give customers the option to require approval for all deposits. Account holders do not have the option to deny deposits in the US, FWIU.
I don't think the US Government can acquire USDC. A while back, stablecoin providers were audited and admonished.
A reasonable person should expect US Government backing of a cryptoasset to reduce volatility.
Large investment banks claimed to be saving the day on cryptoasset volatility.
High-frequency market makers claim to be creating value by creating liquidity at volatile prices.
They eventually added shorting to Bitcoin, which doesn't account for debt obligations; there is no debt within the Bitcoin network: either a transaction clears within the confirmation time or it doesn't.
There are no chargebacks in Bitcoin; a refund is an optional transaction between B and A, possibly with the same amount less fees.
There is no automatic rebilling in Bitcoin (and by extension other blockchains) because the payer does not disclose the private key necessary to withdraw funds in their account to payees.
Escrow can be done with multisig ("multi-signature") transactions or with smart contracts; if at least e.g. 2 out of 3 parties approve, the escrowed transaction completes. So if Alice escrows $100 for Bob conditional upon receipt of a product from Bob, and Bob says he sent it and third-party Charlie says it was received, that's 2 out of 3 approving, so Alice's $100 would then be sent to Bob.
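The 2-of-3 rule reduces to a threshold check. A minimal sketch (this models only the approval logic, not real multisig script or smart-contract code; the party names are from the example above):

```python
# Toy 2-of-3 escrow approval check: the escrowed payment releases only
# once at least 2 of the 3 designated parties have signed off.
REQUIRED = 2
PARTIES = {"alice", "bob", "charlie"}

def escrow_releases(approvals):
    # Only approvals from the designated parties count toward the threshold
    return len(set(approvals) & PARTIES) >= REQUIRED

assert not escrow_releases({"bob"})             # 1-of-3: funds stay held
assert escrow_releases({"bob", "charlie"})      # 2-of-3: released to Bob
assert not escrow_releases({"bob", "mallory"})  # outsiders don't count
```

In an actual multisig transaction the "approvals" are signatures checked by the script or contract, not names in a set, but the threshold logic is the same.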
All blockchains must eventually hard fork to PQ (Post-Quantum) hashing and signature algorithms, or keep hard forking to keep doubling non-PQ key sizes (if they are not already PQ).
PQ (Post-Quantum) algorithms typically produce keys and signatures with a different number of characters, so any hard fork to PQ account keys and addresses will probably require changing data validation routines in webapps that handle transactions.
The coinbase field in a Bitcoin coinbase transaction (or an OP_RETURN output in an ordinary transaction) can be used for correlating blockchain transactions with rows in a SQL database that claim to have valid data or metadata about a transaction: you put a unique signed value in that field when you create the transaction, and your e.g. SQL or Accumulo database references the embedded value as a foreign key.
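A minimal sketch of that foreign-key correlation, using an in-memory SQLite table (the tag value, column names, and invoice data are all hypothetical):

```python
import sqlite3

# Store the signed tag we embedded on-chain (e.g. in an OP_RETURN output)
# as the key for a local table of off-chain metadata about the transaction.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tx_meta (
    onchain_tag TEXT PRIMARY KEY,  -- value embedded in the on-chain tx
    invoice_id  TEXT,
    note        TEXT)""")
conn.execute("INSERT INTO tx_meta VALUES (?, ?, ?)",
             ("sig:deadbeef01", "INV-199", "ACME Corp sale"))

# Later, when scanning the chain and seeing the tag, join back to metadata
row = conn.execute("SELECT invoice_id FROM tx_meta WHERE onchain_tag = ?",
                   ("sig:deadbeef01",)).fetchone()
assert row[0] == "INV-199"
```

Signing the tag is what lets an auditor check that the database row's claim about the on-chain transaction actually came from the payer.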
Crypto tax prep services can't just read transactions from public blockchains; they need exchange API access to get the price of the asset on that exchange at the time of that transaction: there's no on-chain price oracle.
"ILP Addresses, [Payment Pointers], and Blockcerts" https://github.com/blockchain-certificates/cert-issuer/issue... :
> How can or should a Blockcert indicate an ILP Interledger Protocol address or a Payment Pointer?
ILP Addresses:
g.acme.bob
g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199.~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2
Payment Pointer -> URLS: $example.com -> https://example.com/.well-known/pay
$example.com/invoices/12345 -> https://example.com/invoices/12345
$bob.example.com -> https://bob.example.com/.well-known/pay
$example.com/bob -> https://example.com/bob
Revolutionizing software testing: Introducing LLM-powered bug catchers
ScholarlyArticle: "Mutation-Guided LLM-based Test Generation at Meta" (2025) https://arxiv.org/abs/2501.12862v1
Good call, thanks for linking the research paper directly here.
Can this unit test generation capability be connected to the models listed on the SWE-bench [Multimodal] leaderboard?
I'm currently working on running an agent through SWE-Bench (RA.Aid).
What do you mean by connecting the test generation capability to it?
Do you mean generating new eval test cases? I think it could potentially have a use there.
OpenWISP: Multi-device fleet management for OpenWrt routers
Anything similar for opnsense (besides their own service) or pfsense?
Maybe just go with ansible or similar: https://github.com/ansibleguy/collection_opnsense
Updating a fleet of embedded devices like routers (which can come online and go offline at any time) will generally be much easier using a pull-based update model. But if you’ve got control over the build and update lifecycle, a push-based approach like ansible might be appropriate.
Maybe I am missing something, but I would assume that base network infrastructure like routers, firewalls and switches have higher uptime, availability and reliability than ordinary servers.
The problem with push is that the service sitting at the center needs to figure out which devices will need to be re-pushed later on. You can end up with a lot of state that needs action just to get things back to normal.
So if you can convince devices to pull at boot time and then regularly thereafter, you know that the three states they can be in are down, good, or soon to be good. Now you only need to take action when things are down.
Never plan distribution of software and config around the perfect state; minimize the amount of work you need to do for the exceptions.
Unattended upgrades fail and sit there requiring manual intervention (due to lack of transactional updates and/or multiple flash slots (root partitions and bootloader configuration)).
Pull style configuration requires the device to hold credentials in order to authorize access to download the new policy set.
It's possible to add an /etc/init.d that runs sysupgrade on boot, install Python and Ansible, configure and confirm remote logging, and then run `ansible-pull`.
ansible-openwrt eliminates the need to have Python on a device: https://github.com/gekmihesg/ansible-openwrt
But then there's log collection: unless all of the nodes have correctly configured log forwarding at each stage of firmware upgrade, pull-style configuration management will lose logs that push-style configuration management can easily centrally log.
Pull based updates would work on OpenWRT devices if they had enough storage, transactional updates and/or multiple flash slots, and scheduled maintenance windows.
OpenWRT wiki > Sysupgrade: https://openwrt.org/docs/techref/sysupgrade
Calculating Pi in 5 lines of Python
> Infinite series can't really be calculated to completion using a computer,
The sum of an infinite divergent series cannot be calculated with or without a computer.
The sum of an infinite convergent geometric series with first term a and common ratio r (|r| < 1) can be calculated with:
a/(1-r)
Sequence > Limits and convergence:
https://en.wikipedia.org/wiki/Sequence#Limits_and_convergenc...
Limit of a sequence: https://en.wikipedia.org/wiki/Limit_of_a_sequence
SymPy docs > Limits of Sequences: https://docs.sympy.org/latest/modules/series/limitseq.html
> Provides methods to compute limit of terms having sequences at infinity.
Madhava-Leibniz formula for π: https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
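The Madhava-Leibniz partial sums can be computed directly; convergence is slow, since the truncation error after n terms is bounded by the first omitted term, about 4/(2n+1):

```python
from math import pi

# Partial sums of the Madhava-Leibniz series:
# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
def leibniz_pi(n_terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

approx = leibniz_pi(100_000)
assert abs(approx - pi) < 1e-4  # 100k terms for ~4-5 correct digits
```

This is why the article's "5 lines of Python" needs so many iterations; faster-converging series (e.g. Machin-like formulas) get the same digits in far fewer terms.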
Eco-friendly artificial muscle fibers can produce and store energy
> The team utilized poly(lactic acid) (PLA), an eco-friendly material derived from crop-based raw materials, and highly durable bio-based thermoplastic polyurethane (TPU) to develop the artificial muscle fibers that mimic the functional and real muscles.
"Energy harvesting and storage using highly durable Biomass-Based artificial muscle fibers via shape memory effect" (2025) https://www.sciencedirect.com/science/article/abs/pii/S13858...
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
But graphene alone presumably doesn't work in these applications due to lack of tensility, unlike certain natural fibers?
Harmonic Loss Trains Interpretable AI Models
"Harmonic Loss Trains Interpretable AI Models" (2025) https://arxiv.org/abs/2502.01628
Src: https://github.com/KindXiaoming/grow-crystals :
> What is Harmonic Loss?
Cross Entropy: https://en.wikipedia.org/wiki/Cross-entropy
XAI: Explainable AI > Interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation
Government planned it 7 years, beavers built a dam in 2 days and saved $1M
Oracle justified its JavaScript trademark with Node.js–now it wants that ignored
Calling ECMAScript JavaScript was a huge mistake that is still biting us.
Or alternatively, the mistake was not coming up with a better alternative name than ECMAScript. If there was a catchier alternative name that was less awkward to pronounce, people might more happily have switched over.
"JS" because of the .js file extension.
ECMAScript version history: https://en.wikipedia.org/wiki/ECMAScript_version_history
"Java" is an island in Indonesia associated with coffee beans from the Dutch East Indies that Sun Microsystems named their portable software after.
Coffee production in Indonesia: https://en.wikipedia.org/wiki/Coffee_production_in_Indonesia... :
> Certain estates age a portion of their coffee for up to five years, normally in large burlap sacks, which are regularly aired, dusted, and flipped.
Build your own SQLite, Part 4: reading tables metadata
It's interesting to compare this series to the actual source code of sqlite. For example, sqlite uses a LALR parser generator: https://github.com/sqlite/sqlite/blob/master/src/parse.y#L19...
And queries itself to get the schema: https://github.com/sqlite/sqlite/blob/802b042f6ef89285bc0e72...
Lots of questions, but the main one is whether we have made any progress with these new toolchains and programming languages w/ respect to performance or robustness. And that may be unfair to ask of what is a genuinely useful tutorial.
If you don’t know it already, you’ll probably be interested in limbo: https://github.com/tursodatabase/limbo
It’s much more ambitious/complete than the db presented in the tutorial.
If memory serves me correctly, it uses the same parser generator as SQLite, which may answer some of your questions.
Is translation necessary to port the complete SQLite test suite?
sqlite/sqlite//test: https://github.com/sqlite/sqlite/tree/master/test
tursodatabase/limbo//testing: https://github.com/tursodatabase/limbo/tree/main/testing
ArXiv LaTeX Cleaner: Clean the LaTeX code of your paper to submit to ArXiv
It's really a pity that they do this now. Some of the older papers actually had quite a lot of valuable information, comments, discussions, thoughts, even commented-out sections, figures, and tables in them. It gave a much better view of how the paper was written over time, or how the work progressed over time. Sometimes you also see some alternative titles being discussed, which can be quite funny.
E.g. from https://arxiv.org/abs/1804.09849:
%\title{Sequence-to-Sequence Tricks and Hybrids\\for Improved Neural Machine Translation}
% \title{Mixing and Matching Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Analyzing and Optimizing Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Frankenmodels for Improved Neural Machine Translation}
% \title{Optimized Architectures and Training Strategies\\for Improved Neural Machine Translation}
% \title{Hybrid Vigor: Combining Traits from Different Architectures Improves Neural Machine Translation}
\title{The Best of Both Worlds: \\Combining Recent Advances in Neural Machine Translation\\ ~}
Also a lot of things in the Attention is all you need paper: https://arxiv.org/abs/1706.03762v1
Maybe papers need to be put under version control.
Quantum Bayesian Inference with Renormalization for Gravitational Waves
ScholarlyArticle: "Quantum Bayesian Inference with Renormalization for Gravitational Waves" (2025) https://iopscience.iop.org/article/10.3847/2041-8213/ada6ae
NewsArticle: "Black Holes Speak in Gravitational Waves, Heard Through Quantum Walks" (2025) https://thequantuminsider.com/2025/01/29/black-holes-speak-i... :
> Unlike classical MCMC, which requires a large number of iterative steps to converge on a solution, QBIRD uses a quantum-enhanced Metropolis algorithm that incorporates quantum walks to explore the parameter space more efficiently. Instead of sequentially evaluating probability distributions one step at a time, QBIRD encodes the likelihood landscape into a quantum Hilbert space, allowing it to assess multiple transitions between parameter states simultaneously. This is achieved through a set of quantum registers that track state evolution, transition probabilities, and acceptance criteria using a modified Metropolis-Hastings rule.
> Additionally, QBIRD incorporates renormalization and downsampling, which progressively refine the search space by eliminating less probable regions and concentrating computational resources on the most likely solutions. These techniques enable QBIRD to achieve accuracy comparable to classical MCMC while reducing the number of required samples and computational overhead, making it a more promising approach for gravitational wave parameter estimation as quantum hardware matures.
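For contrast with the quantum-walk version, here is a minimal classical Metropolis sampler over a single parameter (a standard normal stand-in for the likelihood; this is a generic textbook baseline, not the paper's code). It is exactly the sequential propose/accept loop that QBIRD's quantum walk is meant to parallelize:

```python
import math
import random

random.seed(0)

def log_likelihood(x):
    # Standard normal target, up to an additive constant
    return -0.5 * x * x

def metropolis(n_steps, step_size=1.0):
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0, step_size)
        # Accept with probability min(1, p(proposal)/p(x)), in log space
        if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20_000)
mean = sum(samples) / len(samples)
# mean should be near 0, the target distribution's mean
```

Every step here depends on the previous one, which is why classical MCMC needs many iterations to converge; QBIRD encodes the transition structure in a Hilbert space instead.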
Parameter estimation algorithms:
"Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171
"Robustly learning Hamiltonian dynamics of a superconducting quantum processor" (2024) https://www.nature.com/articles/s41467-024-52629-3 .. https://news.ycombinator.com/item?id=42086445
Can Large Language Models Emulate Judicial Decision-Making? [Paper]
An actor can emulate the communication style of judicial decision language, sure.
But the cost of a wrong answer (wrongful conviction) exceeds a threshold of ethical use.
> We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.
From "Asking 60 LLMs a set of 20 questions" https://news.ycombinator.com/item?id=37451642 :
> From https://news.ycombinator.com/item?id=36038440 :
>> Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law
>> A "who hath done it" exercise
>> "For each of these things, tell me whether God, Others, or You did it"
AI should never be judge, jury, and executioner.
Homotopy Type Theory
type theory notes: https://news.ycombinator.com/item?id=42440016#42444882
HoTT in Lean 4: https://github.com/forked-from-1kasper/ground_zero
I Wrote a WebAssembly VM in C
This is great! The WebAssembly Core Specification is actually quite readable, although some of the language can be a bit intimidating if you're not used to reading programming language papers.
If anyone is looking for a slightly more accessible way to learn WebAssembly, you might enjoy WebAssembly from the Ground Up: https://wasmgroundup.com
(Disclaimer: I'm one of the authors)
I know one of WebAssembly's biggest features by design is security / "sandbox".
But I've always gotten confused with... it is secure because by default it can't do much.
I don't quite understand how to view WebAssembly. You write in one language, it compiles things like basic math (nothing with network or filesystem) to another and it runs in an interpreter.
I feel like I have a severe lack/misunderstanding. There's a ton of hype for years, lots of investment... but it isn't like any case where you want to add Lua to an app you can add WebAssembly/vice versa?
WebAssembly can communicate through buffers. WebAssembly can also import foreign functions (Javascript functions in the browser).
You can get output by reading the buffer at the end of execution/when receiving callbacks. So, for instance, you pass a few frames worth of buffers to WASM, WASM renders pixels into the buffers, calls a callback, and the Javascript reads data from the buffer (sending it to a <canvas> or similar).
The benefit of WASM is that it can't be very malicious by itself. It requires the runtime to provide it with exported functions and callbacks to do any file I/O, network I/O, or spawning new tasks. Lua and similar tools can go deep into the runtime they exist in, altering system state and messing with system memory if they want to, while WASM can only interact with the specific API surface you provide it.
That makes WASM less powerful, but more predictable, and in my opinion better for building integrations with as there is no risk of internal APIs being accessed (that you will be blamed for if they break in an update).
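The buffer-plus-imported-functions pattern described above can be simulated outside of WASM; a minimal Python sketch of the host/guest relationship (the names `make_guest`, `on_frame_done`, etc. are illustrative, not a real WASM API):

```python
# Host/guest buffer-passing pattern, as with WASM + JS in the browser:
# the guest can only touch the shared buffer and the functions the host exports.

def make_guest(shared, host_exports):
    """Return a 'guest' that sees only the shared buffer and host exports."""
    def render_frame(color):
        for i in range(len(shared)):
            shared[i] = color            # guest writes pixels into the buffer
        host_exports["on_frame_done"]()  # guest signals via an imported callback
    return render_frame

frames_done = []
shared = bytearray(8)  # stand-in for WASM linear memory
guest = make_guest(shared,
                   {"on_frame_done": lambda: frames_done.append(bytes(shared))})

guest(0x7F)
print(frames_done[0].hex())  # host reads the rendered buffer back out
```

The guest has no ambient authority: it can only mutate the buffer it was handed and call the callbacks it was given, which is the whole point of the sandbox.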
WASI Preview 1 and WASI Preview 2 can do file and network I/O IIUC.
Re: tty support in container2wasm and fixed 80x25 due to lack of SIGWINCH support in WASI Preview 1: https://github.com/ktock/container2wasm/issues/146
The File System Access API requires granting each app access to each folder.
jupyterlab-filesystem-access only works with Chromium based browsers, because FF doesn't support the File System Access API: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
The File System Access API is useful for opening a local .ipynb and .csv with JupyterLite, which builds CPython for WASM as Pyodide.
There is a "Direct Sockets API in Chrome 131" but not in FF; so WebRTC and WebSocket relaying is unnecessary for WASM apps like WebVM: https://news.ycombinator.com/item?id=42029188
WASI Preview 2: https://github.com/WebAssembly/WASI/blob/main/wasip2/README.... :
> wasi-io, wasi-clocks, wasi-random, wasi-filesystem, wasi-sockets, wasi-cli, wasi-http
US bill proposes jail time for people who download DeepSeek
That would disincentivize this type of research, for example:
"DeepSeek's Hidden Bias: How We Cut It by 76% Without Performance Loss" (2025) https://news.ycombinator.com/item?id=42868271
https://news.ycombinator.com/item?id=42891042
TIL about BBQ: Bias Benchmark for QA
"BBQ: A Hand-Built Bias Benchmark for Question Answering" (2021) https://arxiv.org/abs/2110.08193
Waydroid – Android in a Linux container
Surprised to see this on the frontpage - it's a well known piece of software.
It's unfortunate that there are no Google-vended images (e.g. the generic system image) that run on Waydroid. Typing my password into random ROMs from the internet sketches me out.
I wouldn't say it runs a "random ROM from the internet" - LineageOS is a very well-established project and is fully FOSS (free and open source software) except for firmware necessary for specific devices. It is the natural choice for any project, such as Waydroid, that requires good community support and ongoing availability.
Over a number of years, Google have progressively removed many of the original parts of AOSP (the FOSS foundation upon which Android is based), which means that alternative components have to be developed by projects like LineageOS. In spite of this, I suspect that LineageOS makes fewer modifications to AOSP than most phone vendors do, including Google themselves!
A few things seem like they're consistently missing from these projects: hardware 3D acceleration from the host in a version of OpenGL ES + Vulkan that most phones have natively. Also, many apps have built-in ways of detecting that they're not running on a phone and bail out (looking at cpuinfo and cross-referencing that with the purported device being run).
It also seems that the expected on-device ARM performance is increasing (along with expected throughput), and that the capability of the x86 host device you need to emulate even a modest modern mobile ARM SoC is getting higher and higher.
Lastly, the Android version supported is almost always 3-4 generations behind the current Android. Apps are quick to drop legacy Android support or run with fewer features/less optimizations on older versions of the OS. The Android base version in this project is from 2020.
Anecdotally, using bluestacks (which indisputably has the most compatible and optimized emulation stack in the entire space) with a 7800X3D / RTX 3090 still runs most games slower than a snapdragon 8 phone from yesteryear running natively.
virtio-gpu rutabaga was recently added to QEMU, IIUC mostly by Google for Chromebook Android emulation, Android Studio, or both?
virtio-gpu-rutabaga: https://www.qemu.org/docs/master/system/devices/virtio-gpu.h...
Rutabaga Virtual Graphics Interface: https://crosvm.dev/book/appendix/rutabaga_gfx.html
gfxstream: https://android.googlesource.com/platform/hardware/google/gf...
"Gfxstream Merged Into Mesa For Vulkan Virtualization" (2024-09) https://www.phoronix.com/news/Mesa-Gfxstream-Merged
I don't understand why there is not an official x86 container / ROM for Android development? Do CI builds of Android apps not run tests with recent versions of Android? How do CI builds of APKs run GUI tests without an Android container?
There is no official support for x86 in android any more - the Android-x86 project was the last I know that supported/maintained it. Last release was 2022.
For apps that use Vulkan natively, it's easy - but many still use and rely on OpenGL ES. It's a weird scenario where you have apps that are now supporting Vulkan, but they have higher minimum OS requirements as a result... but those versions of Android aren't supported by these type of projects.
37D boundary of quantum correlations with a time-domain optical processor
"Exploring the boundary of quantum correlations with a time-domain optical processor" (2025) https://www.science.org/doi/10.1126/sciadv.abd8080 .. https://arxiv.org/abs/2208.07794v3 :
> Abstract: Contextuality is a hallmark feature of the quantum theory that captures its incompatibility with any noncontextual hidden-variable model. The Greenberger--Horne--Zeilinger (GHZ)-type paradoxes are proofs of contextuality that reveal this incompatibility with deterministic logical arguments. However, the GHZ-type paradox whose events can be included in the fewest contexts and which brings the strongest nonclassicality remains elusive. Here, we derive a GHZ-type paradox with a context-cover number of three and show this number saturates the lower bound posed by quantum theory. We demonstrate the paradox with a time-domain fiber optical platform and recover the quantum prediction in a 37-dimensional setup based on high-speed modulation, convolution, and homodyne detection of time-multiplexed pulsed coherent light. By proposing and studying a strong form of contextuality in high-dimensional Hilbert space, our results pave the way for the exploration of exotic quantum correlations with time-multiplexed optical systems.
New thermogalvanic tech paves way for more efficient fridges
"Solvation entropy engineering of thermogalvanic electrolytes for efficient electrochemical refrigeration" (2025) https://www.cell.com/joule/fulltext/S2542-4351(25)00003-0
Thanks for that, it's great that it's 10x better than the previous best effort. It's notable that it needs to get 20x better again before it starts to have useful applications :-).
Someday maybe!
Quantum thermal diodes: https://news.ycombinator.com/item?id=42537703
From https://news.ycombinator.com/item?id=38861468 :
> Laser cooling: https://en.wikipedia.org/wiki/Laser_cooling
> Cooling with LEDs in reverse: https://issuu.com/designinglighting/docs/dec_2022/s/17923182 :
> "Near-field photonic cooling through control of the chemical potential of photons" (2019) https://www.nature.com/articles/s41586-019-0918-8
High hopes, low expectations :-). That said, 'quantum thermal diode' sounds like something that breaks on your space ship and if you don't fix it you won't be able to get the engines online to outrun the aliens. Even after reading the article I think the unit of measure there should be milli-demons in honor of Maxwell's Demon.
Emergence of a second law of thermodynamics in isolated quantum systems
ScholarlyArticle: "Emergence of a Second Law of Thermodynamics in Isolated Quantum Systems" (2025) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
NewsArticle: "Even Quantum Physics Obeys the Law of Entropy" https://www.tuwien.at/en/tu-wien/news/news-articles/news/auc...
NewsArticle: "Sacred laws of entropy also work in the quantum world, suggests study" ... "90-year-old assumption about quantum entropy challenged in new study" https://interestingengineering.com/science/entropy-also-work...
Polarization-dependent photoluminescence of Ce-implanted MgO and MgAl2O4
NewsArticle: "Scientists explore how to make quantum bits with spinel gemstones" (2025) https://news.uchicago.edu/story/scientists-explore-how-make-... :
> A type of gemstone called spinel can be used to store quantum information, according to new research from a collaboration involving University of Chicago, Tohoku University, and Argonne National Laboratory.
3D scene reconstruction in adverse weather conditions via Gaussian splatting
Large Language Models for Mathematicians (2023)
It makes sense for LLMs to work with testable code for symbolic mathematics: CAS (Computer Algebra System) code instead of LaTeX, which only roughly corresponds to executable symbolic expressions.
Are LLMs training on the AST parses of the symbolic expressions, or token co-occurrence? What about training on the relations between code and tests?
Benchmarks for math and physics LLMs: FrontierMath, TheoremQA, Multi SWE-bench: https://news.ycombinator.com/item?id=42097683
Large language models think too fast to explore effectively
Maps well to Kahneman's "Thinking Fast and Slow" framework
system 1 thinking for early layer processing of uncertainty in LLMs. quick, intuitive decisions, focuses on uncertainty, happens in early transformer layers.
system 2 thinking for later layer processing of empowerment (selecting elements that maximize future possibilities). strategic, deliberate evaluation, considering long-term possibilities, happens in later layers.
system 1 = 4o/llama 3.1
system 1 + system 2 = o1/r1 reasoning models
empowerment calculation seems possibly oversimplified - assumes a static value for elements over a dynamic context-dependent empowerment
interesting that higher temperatures improved performance slightly for system 1 models although they still made decisions before empowerment information could influence them
edit: removed the word "novel". The paper shows early-layer processing of uncertainty vs later-layer processing of empowerment.
Stanovich proposes a three tier model http://keithstanovich.com/Site/Research_on_Reasoning_files/S...
Modeling an analog system like human cognition into any number of discrete tiers is inherently kind of arbitrary and unscientific. I doubt you could ever prove experimentally that all human thinking works through exactly two or three or ten or whatever number of tiers. But it's at least possible that a three-tier model facilitates building AI software which is "good enough" for many practical use cases.
Funny you should say that because an American guy did that a hundred years ago and nailed it.
He divided reasoning into the two categories corollarial and theorematic.
Charles Sanders Peirce > Pragmatism > Theory of inquiry > Scientific Method: https://en.wikipedia.org/wiki/Charles_Sanders_Peirce#Scienti...
Peirce’s Deductive Logic: https://plato.stanford.edu/entries/peirce-logic/
Scientific Method > 2. Historical Review: Aristotle to Mill https://plato.stanford.edu/entries/scientific-method/#HisRev...
Scientific Method: https://en.wikipedia.org/wiki/Scientific_method
Reproducibility: https://en.wikipedia.org/wiki/Reproducibility
Replication crisis: https://en.wikipedia.org/wiki/Replication_crisis
TFS > Replication crisis https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
"Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35879007 :
Lateralization of brain function: https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...
Ask HN: Percent of employees that benefit financially from equity offers?
Title says it all. Equity offers are a very common thing in tech. I don't personally know anyone who has made money from equity offers, though nearly all my colleagues have received them at some point.
Does anyone have real data on how many employees actually see financial upside from equity grants? Are there studies or even anecdotal numbers on how common it is for non-executives/non-founders to walk away with any money? Specifically talking about privately held US startups.
From https://news.ycombinator.com/item?id=29141796 :
> There are a number of options/equity calculators:
> https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits", "~20% of companies will make you money")
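Those rough outcome buckets imply a simple expected-value estimate. A sketch using the tldroptions.io probabilities quoted above; the payoff multiples and the $50k grant size are purely hypothetical assumptions for illustration:

```python
# Expected value of an equity grant under rough outcome buckets.
# Probabilities are the tldroptions.io figures quoted above;
# the payoff multiples and grant size are hypothetical.
outcomes = [
    (0.65, 0.0),  # no exit: grant worth nothing
    (0.15, 0.2),  # low exit: pennies on the dollar
    (0.20, 2.0),  # "makes you money": 2x assumed
]
grant_value = 50_000
ev = sum(p * mult * grant_value for p, mult in outcomes)
print(f"expected value: ${ev:,.0f}")  # -> expected value: $21,500
```

Even with an optimistic 2x multiple on the winning bucket, the modal outcome is zero, which matches the anecdotal experience in the question.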
Ultra-fast picosecond real-time observation of optical quantum entanglement
"Real-time observation of picosecond-timescale optical quantum entanglement towards ultrafast quantum information processing" (2025) https://www.nature.com/articles/s41566-024-01589-7 .. https://arxiv.org/abs/2403.07357v1 (2024) :
> Abstract: Entanglement is a fundamental resource for various optical quantum information processing (QIP) applications. To achieve high-speed QIP systems, entanglement should be encoded in short wavepackets. Here we report the real-time observation of ultrafast optical Einstein–Podolsky–Rosen correlation at a picosecond timescale in a continuous-wave system. Optical phase-sensitive amplification using a 6-THz-bandwidth waveguide-based optical parametric amplifier enhances the effective efficiency of 70-GHz-bandwidth homodyne detectors, mainly used in 5G telecommunication, enabling its use in real-time quantum state measurement. Although power measurement using frequency scanning, such as an optical spectrum analyser, is not performed in real time, our observation is demonstrated through the real-time amplitude measurement and can be directly used in QIP applications. The observed Einstein–Podolsky–Rosen states show quantum correlation of 4.5 dB below the shot-noise level encoded in wavepackets with 40 ps period, equivalent to 25 GHz repetition—103 times faster than previous entanglement observation in continuous-wave systems. The quantum correlation of 4.5 dB is already sufficient for several QIP applications, and our system can be readily extended to large-scale entanglement. Moreover, our scheme has high compatibility with optical communication technology such as wavelength-division multiplexing, and femtosecond-timescale observation is also feasible. Our demonstration is a paradigm shift in accelerating accessible quantum correlation—the foundational resource of all quantum applications—from the nanosecond to picosecond timescales, enabling ultrafast optical QIP.
Recipe Database with Semantic Search on Digital Ocean's Smallest VM
Datasette-lite supports SQLite in WASM in a browser.
DuckDB WASM would also solve for a recipe database without an application server, for example in order to reduce annual hosting costs of a side project.
Is the scraped data CC-BY-SA licensed? Attribution would be good.
/? datasette vector search
FAISS
WASM vector database: https://www.google.com/search?q=WASM+vector+database
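For a corpus as small as a recipe database, a dedicated vector database (WASM or otherwise) may be overkill: brute-force cosine similarity over precomputed embeddings is a few lines. The 3-dimensional "embeddings" below are toy values, not output from a real embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings" for illustration only.
recipes = {
    "tomato soup":  [0.9, 0.1, 0.0],
    "bread":        [0.1, 0.9, 0.2],
    "tomato salad": [0.8, 0.2, 0.1],
}
query = [0.9, 0.1, 0.05]  # pretend this embeds "tomato-y starter"
ranked = sorted(recipes, key=lambda r: cosine(query, recipes[r]), reverse=True)
print(ranked[0])  # -> tomato soup
```

At a few thousand recipes this linear scan is fast enough to run client-side, which pairs well with the no-application-server approaches above.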
Moiré-driven topological electronic crystals in twisted graphene
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
Adding concurrent read/write to DuckDB with Arrow Flight
Just sanity checking here - with flight write streams to duckdb, I'm guessing there is no notion of transactional boundary here, so if we want data consistency during reads, that's another level of manual app responsibilities? And atomicity is there, but at the single record batch or row group level?
Ex: if we have a streaming financial ledger as 2 tables, that is 2 writes, and a reader might see an inconsistent state of only 1 write
Ex: streaming ledger as one table, and the credit+debit split into 2 distanced rowgroups, same inconsistency?
Ex: in both cases, we might have the server stream back an ack of what was written, so we could at least get a guarantee of which timestamps are fully written for future reads, and queries can manually limit to known-complete intervals
We are looking at adding streaming writes to GFQL, an open source columnar (arrow-native) CPU/GPU graph query language, where this is the same scenario: appends mean updating both the nodes table and the edges table
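The ack-and-watermark idea in the third example can be sketched as follows; `Ledger` and its methods are hypothetical application-level code, not a DuckDB or Arrow Flight API:

```python
# Watermark pattern: readers only see intervals the writer has fully acked.
class Ledger:
    def __init__(self):
        self.rows = []       # (timestamp, table, amount)
        self.watermark = -1  # highest fully-written timestamp

    def write_batch(self, ts, credit, debit):
        # Both sides of the entry are appended before the ack...
        self.rows.append((ts, "credits", credit))
        self.rows.append((ts, "debits", debit))
        self.watermark = ts  # ...so the ack implies a consistent pair.

    def read_consistent(self):
        return [r for r in self.rows if r[0] <= self.watermark]

ledger = Ledger()
ledger.write_batch(1, 100, -100)
ledger.rows.append((2, "credits", 50))  # in-flight write, not yet acked
print(ledger.read_consistent())         # readers never see the half-written ts=2
```

This gives read-your-acked-writes consistency without database-level transactions, at the cost of the application owning the watermark bookkeeping.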
Yes, reading this post (working around a database's concurrency control) made me raise an eyebrow. If you are ok with inconsistent data then that's fine. Or if you handle consistency at a higher level that's fine too. But if either of these are the case why would you be going through DuckDB? You could write out Parquet files directly?
cosmos/iavl is a Merkleized AVL tree.
https://github.com/cosmos/iavl :
> Merkleized IAVL+ Tree implementation in Go
> The purpose of this data structure is to provide persistent storage for key-value pairs (say to store account balances) such that a deterministic merkle root hash can be computed. The tree is balanced using a variant of the AVL algorithm so all operations are O(log(n)).
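A deterministic Merkle root over sorted key-value pairs, as the IAVL README describes, can be sketched with stdlib hashing. The `leaf:`/`node:` framing here is illustrative only, not the IAVL wire format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(kv: dict) -> bytes:
    """Deterministic Merkle root: sort keys so every node computes the same root."""
    leaves = [h(b"leaf:" + k.encode() + b"=" + v.encode())
              for k, v in sorted(kv.items())]
    if not leaves:
        return h(b"empty")
    while len(leaves) > 1:
        if len(leaves) % 2:
            leaves.append(leaves[-1])  # duplicate last leaf on odd levels
        leaves = [h(b"node:" + leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

a = merkle_root({"alice": "100", "bob": "50"})
b = merkle_root({"bob": "50", "alice": "100"})
print(a.hex() == b.hex())  # insertion order doesn't change the root
```

Any single-balance change produces a different root, which is what lets replicas cheaply verify they hold identical state.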
Integer Vector clock or Merkle hashes?
Why shouldn't you store account balances in git, for example?
Or, why shouldn't you append to Parquet or Feather and LZ4 for strongly consistent transactional data?
Centralized databases can have Merkle hashes, too;
"How Postgres stores data on disk" https://news.ycombinator.com/item?id=41163785 :
> Those systems index Parquet. Can they also index Feather IPC, which an application might already have to journal and/or log, and checkpoint?
DLT applications for strong transactional consistency sign and synchronize block messages and transaction messages.
Public blockchains have average transaction times and costs.
Private blockchains also have TPS Transactions Per Second metrics, and unknown degrees of off-site redundancy for consistent storage with or without indexes.
Blockchain#Openness: https://en.wikipedia.org/wiki/Blockchain#Openness :
> An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. [46][47][48][49][50] Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. [51] Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. [52]
> Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. [46][48] Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion." [10][53]
Merkle Town: https://news.ycombinator.com/item?id=38829274 :
> How CT works > "How CT fits into the wider Web PKI ecosystem": https://certificate.transparency.dev/howctworks/
From "PostgreSQL Support for Certificate Transparency Logs Now Available" https://news.ycombinator.com/item?id=42628223 :
> Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
> Sigstore Rekor also has centralized Merkle hashes.
I think you replied in the wrong post.
No, I just explained how the world does strongly consistent distributed databases for transactional data, which is the exact question here.
DuckDB does not yet handle strong consistency. Blockchains and SQL databases do.
Blockchains are a fantastic way to run things slowly ;-) More seriously: Making crypto fast does sound like a fun technical challenge, but well beyond what our finance/gov/cyber/ai etc customers want us to do.
For reference, our goal here is to run around 1 TB/s per server, and many times more on a beefier server. The same tech just landed at spot #3 on the Graph 500 on its first try.
To go even bigger & faster, we are looking for ~phd intern fellows to run on more than one server, if that's your thing: OSS GPU AI fellowship @ https://www.graphistry.com/careers
The Flight perspective aligns with what we're doing. We skip the DuckDB CPU indirections (why drink through a long twirly straw?) and go straight to Arrow on GPU RAM. For our other work, if DuckDB does give reasonable transactional guarantees here, that's interesting... hence my (in earnest) original question. AFAICT, the answers rest on operational answers & docs that don't connect to how we normally talk about databases giving you consistent vs. inconsistent views of data.
Do you think that blockchain engineers are incapable of developing high-throughput distributed systems due to engineering incapacity, or due to real limits on how fast a strongly consistent, sufficiently secured cryptographic distributed system can be? Are blockchain devs all just idiots, or have they dumbly prioritized data integrity, which supposedly doesn't matter because it's all about big data these days and nobody needs CAP?
From "Rediscovering Transaction Processing from History and First Principles" https://news.ycombinator.com/item?id=41064634 :
> metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)
> Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr
TB/s in query processing of data already in RAM?
/? TB/s "hnlog"
- https://news.ycombinator.com/item?id=40423020 , [...] :
> The HBM3E Wikipedia article says 1.2TB/s.
> Latest PCIe 7 x16 says 512 GB/s:
fiber optics: 301 TB/s (2024-05)
Cerebras: https://en.wikipedia.org/wiki/Cerebras :
WSE-2 on-chip SRAM memory bandwidth: 20 PB/s / 220 PB/s
WSE-3: 21 PB/s
HBM > Technology: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Technolo... :
HBM3E: 9.8 Gbit/s, 1229 Gbyte/s (2023)
HBM4: 6.4 Gbit/s, 1638 Gbyte/s (2026)
LPDDR SDRAM > Generations: https://en.wikipedia.org/wiki/LPDDR#Generations :
LPDDR5X: 1,066.63 MB/s (2021)
GDDR7: https://en.m.wikipedia.org/wiki/GDDR7_SDRAM
GDDR7: 32 Gbps/pin - 48 Gbps/pin,[11] and chip capacities up to 64 Gbit, 192 GB/s
List of interface bit rates: https://en.wikipedia.org/wiki/List_of_interface_bit_rates :
PCIe7 x16: 1.936 Tbit/s 242 GB/s (2025)
800GBASE-X: 800 Gbps (2024)
DDR5-8800: 70.4 GB/s
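The mixed Gbit/s and GB/s figures in these lists can be cross-checked with a one-line conversion (decimal SI units, 8 bits per byte assumed):

```python
def tbit_s_to_gb_s(tbit: float) -> float:
    # Tbit/s -> GB/s: 1 Tbit = 1e12 bits; 1 GB = 8e9 bits (decimal SI).
    return tbit * 1e12 / 8 / 1e9

# PCIe 7.0 x16: 1.936 Tbit/s -> 242.0 GB/s, matching the list above.
print(tbit_s_to_gb_s(1.936))
```

Note that marketing figures sometimes mix decimal (GB) and binary (GiB) units, so small discrepancies between sources are common.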
Bit rate > In data communications: https://en.wikipedia.org/wiki/Bit_rate#In_data_communications ; Gross and net bit rate, Information rate, Network throughput, Goodput
Re: TPUs, NPUs, TOPS: https://news.ycombinator.com/item?id=42318274 :
> How many TOPS/W and TFLOPS/W? (T [Float] Operations Per Second per Watt (hour ?))*
Top 500 > Green 500: https://www.top500.org/lists/green500/2024/11/ :
PFlop/s (Rmax)
Power (kW)
GFlops/watts (Energy Efficiency)
Performance per watt > FLOPS/watts: https://en.wikipedia.org/wiki/Performance_per_watt#FLOPS_per...
Electrons: 50%–99% of c, the speed of light ( Speed of electricity: https://en.wikipedia.org/wiki/Speed_of_electricity , Velocity factor of a CAT-7 cable: https://en.wikipedia.org/wiki/Velocity_factor#Typical_veloci... )
Photons: c (*)
Gravitational Waves: Even though both light and gravitational waves were generated by this event, and they both travel at the same speed, the gravitational waves stopped arriving 1.7 seconds before the first light was seen ( https://bigthink.com/starts-with-a-bang/light-gravitational-... )
But people don't do computation with gravitational waves.
To a reasonable rounding error.. yes
How would you recommend that appends to Parquet files be distributedly synchronized with zero trust?
Raft, Paxos, BFT, ... /? hnlog paxos ... see "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506
To have consensus about protocol revisions; To have data integrity and consensus about the merged sequence of data in database {rows, documents, named graphs, records,}.
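One classical answer to synchronizing distributed appends (among mutually trusting nodes; zero-trust settings need BFT-style protocols instead) is two-phase commit. A minimal in-process sketch, with participants and votes simulated rather than networked:

```python
# Two-phase commit: every participant must vote yes in the prepare
# phase before the coordinator tells anyone to commit.
def two_phase_commit(participants, txn):
    votes = [p.prepare(txn) for p in participants]  # phase 1: prepare
    if all(votes):
        for p in participants:
            p.commit(txn)                           # phase 2: commit
        return "committed"
    for p in participants:
        p.abort(txn)
    return "aborted"

class Node:
    def __init__(self, will_vote_yes=True):
        self.will_vote_yes = will_vote_yes
        self.state = "idle"
    def prepare(self, txn): return self.will_vote_yes
    def commit(self, txn):  self.state = "committed"
    def abort(self, txn):   self.state = "aborted"

nodes = [Node(), Node(), Node(will_vote_yes=False)]
print(two_phase_commit(nodes, {"append": "rowgroup"}))  # one "no" vote aborts all
```

The blocking-on-coordinator-failure weakness of 2PC is exactly why Raft/Paxos (and, in adversarial settings, BFT consensus) exist.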
"We're building a new static type checker for Python"
Could it please also do runtime type checking?
(PyContracts and iContract do runtime type checking, but it's not very performant.)
That MyPy isn't usable at runtime causes lots of re-work.
Have you tried beartype? It's worked well for me and has the least overhead of any other runtime type checker.
I think TypeGuard (https://github.com/agronholm/typeguard) also does runtime type checking. I use beartype BTW.
pycontracts: https://github.com/AlexandruBurlacu/pycontracts
icontract: https://github.com/Parquery/icontract
The DbC Design-by-Contract patterns supported by icontract probably have code quality returns beyond saving work.
Safety critical coding guidelines specify that there must be runtime type and value checks at the top of every function.
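A minimal stdlib sketch of what beartype/typeguard do, checking annotations at call time (real libraries handle generics, containers, forward references, and performance far better than this illustration):

```python
import functools
import inspect

def runtime_typed(func):
    """Check simple (non-generic) annotations on every call."""
    sig = inspect.signature(func)
    hints = func.__annotations__

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name}={value!r} is not {expected.__name__}")
        result = func(*args, **kwargs)
        ret = hints.get("return")
        if isinstance(ret, type) and not isinstance(result, ret):
            raise TypeError(f"return value {result!r} is not {ret.__name__}")
        return result
    return wrapper

@runtime_typed
def scale(x: int, factor: float) -> float:
    return x * factor

print(scale(3, 1.5))  # -> 4.5
# scale("3", 1.5) would raise TypeError at call time, not just in the checker
```

This is the per-call overhead the parent comments mention; beartype keeps it low by compiling type checks into specialized wrappers rather than re-inspecting hints on every call.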
Is there a dilatant fluid / superfluid quantum gravity explanation for this?
The viscosity in a superfluid is zero.
I just sent an email to the authors. From https://gemini.google.com/share/0b906914bcfb :
> If we apply Fedi's "Modified Stokes' Law" to the RX J0528+2838 observation, the interpretation shifts from a magnetic propeller effect to a vacuum friction effect. [...]
> Hypothesis: The "persistent bow shock" is not just the star pushing gas, but the star’s high-velocity (or high-rotation) magnetic field creating shear stress on the quantum vacuum itself
How to test whether MHD or SQR best explain the given phenomena?
> Measure: Precise timing of the binary's orbital period (currently ~80 minutes) over the next 5–10 years.
> Orbit decays exactly as General Relativity predicts -> MHD favored.
> Orbit decays significantly faster (anomalous braking) -> Fedi [SQR, dilatant fluid] and/or Alternative Physics favored.
..
> How to Measure: Map the density of the Interstellar Medium (ISM) around the star.
> Result A: The shock brightness correlates perfectly with patches of dense gas -> MHD favored (Gas hitting Gas).
> Result B: The shock remains bright even in "empty" voids where there is no gas to shock, implying the "medium" is space itself -> Fedi favored.
Are these good tests of MHD vs SQR?
> How to Measure: Observe a background transient event (like a distant quasar or burst) passing through the bow shock. Check for time-of-arrival delays between X-rays and Radio waves.