Contents^
Items^
Graphene is effectively free when you flash-heat unsorted recycled plastic and sell or use the hydrogen.
Graphene can be produced from CO2.
CO2 is overly-abundant and present in emissions that need to be filtered anyway.
What types of graphene and other forms of carbon do not conduct electricity, are biodegradable, and would be usable as a graphene PCB for semiconductors and superconductors?
Graphene oxide (low cost of production); graphane (hydrogenated graphene; high cost of production); diamond (lowering cost of production, also useful for nitrogen-vacancy (NV) quantum computing, probably in part due to the resistivity of the molecular lattice).
How could graphene oxide PCBs be made fire-proof?
Non-Conductive Flame Retardants: phosphorus, nitrogen (e.g. melamine), intumescent systems, inorganic fillers
Is there a bio-based flame-retardant organic filler for [Graphene Oxide] PCBs?
Making PyPI's test suite faster
I get that pytest has features that unittest does not, but how is scanning for test files in a directory considered appropriate for what is called a high security application in the article?
For high security applications the test suite should be boring and straightforward. pytest is full of magic, which makes it so slow.
Python in general has become so complex, informally specified and bug ridden that it only survives because of AI while silencing critics in their bubble.
The complexity includes PSF development processes, which lead to:
https://www.schneier.com/blog/archives/2024/08/leaked-github...
strace is one way to determine how many stat calls a process makes.
Developers avoid refactoring costs by using dependency inversion, fixtures and functional test assertions without OO in the tests, too.
Pytest collection could be made faster with ripgrep; does it even need the AST? A thread here mentions that it's possible to prepare a list of .py test files containing functions that start with "test_" — for example with ripgrep — and pass that file list to pytest on the command line.
One day I did too much work refactoring tests to minimize maintenance burden and wrote myself a functional test runner that captures AssertionErrors and outputs with stdlib only.
It's possible to use unittest.TestCase() assertion methods functionally:
assert 0 == 1
# AssertionError
import unittest
test = unittest.TestCase()
test.assertEqual(0, 1)
# AssertionError: 0 != 1
unittest.TestCase assertion methods have default error messages, but the `assert` keyword does not. In order to support one-file stdlib-only modules, I have mocked pytest.mark.parametrize a number of times.
chmp/ipytest is one way to transform `assert a == b` to `assertEqual(a,b)` like Pytest in Jupyter notebooks.
Python continues to top language use and popularity benchmarks.
Python is not a formally specified language, mostly does not have constant-time operations (or documented complexity in docstring attrs), has a stackless variant, supported asynchronous coroutines natively before C++, now has a tail-call-based interpreter in 3.14, now has a nogil mode, and is GPU-accelerated in many different ways.
How best could they scan for API tokens committed to public repos?
Brandon's Semiconductor Simulator
Which other simulators show electron charge density and heat dissipation?
Can this simulate this?:
"Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4 .. https://news.ycombinator.com/item?id=43506198
What about (graphene) superconductors though?
On my info page (https://brandonli.net/semisim/info) there's a list of things my simulation can and can't do. After taking a look at the paper you mentioned, I think simulating it may very well be possible, however it might take a bit of effort. As for graphene, its band structure is different enough that I don't think it would work.
Note that my simulation is intended for educational purposes only, not scientific research.
- Brandon
Thanks, quite the useful simulator; I hadn't found that page yet. Additional considerations for circuit simulators:
What does the simulator say about signal delay and/or propagation in electronic circuits and their fields? How long does it take for a lightbulb to turn on after a switch is thrown, given the length of the circuit and the real distance between points in it?
(I learned this gap in our understanding of electron behavior from this experiment, which had never been done FWIU: "How Electricity Actually Works" (2022) https://www.youtube.com/watch?v=oI_X2cMHNe0 )
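A back-of-the-envelope sketch of the point in that video: the earliest the bulb can respond is set by the field propagating across the physical gap between switch and bulb at roughly the speed of light, not by electrons traversing the full wire length (assuming a vacuum/air velocity factor of ~1; `field_delay_ns` is an illustrative helper, not part of the simulator):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def field_delay_ns(gap_m, velocity_factor=1.0):
    """Earliest possible signal arrival across the switch-to-bulb gap,
    set by field propagation at (velocity_factor * c)."""
    return gap_m / (C * velocity_factor) * 1e9

# A 1 m gap implies roughly 3.3 ns before the bulb sees any current at all,
# regardless of how long the wire loop is.
```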
FWIW, additionally:
Hall Effect and Quantum Anomalous Hall Effect;
"Tunable superconductivity and Hall effect in a transition metal dichalcogenide" (2025) https://news.ycombinator.com/item?id=43347319
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
From "Single-chip photonic deep neural network with forward-only training" https://news.ycombinator.com/item?id=42314581 :
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024) https://www.nature.com/articles/s41586-023-07010-7
"Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 .. https://news.ycombinator.com/item?id=39365579
> "Room-temperature quantum coherence of entangled multiexcitons in a metal-organic framework" (2024) https://www.science.org/doi/10.1126/sciadv.adi3147
Electrons (and photons and phonons and other fields of particles) are more complex than that though.
Google launches 'implicit caching' to make accessing latest AI models cheaper
Progress toward fusion energy gain as measured against the Lawson criteria
It should be noted that "breakeven" is often misleading.
There's "breakeven" as in "the reaction produces more energy than put into it", and there's breakeven as in "the entire reactor system produces more energy than put into it", which isn't quite the same thing.
In the laser business, the latter is called "wall plug efficiency," which is laser power out per electrical power in.
"Uptime Percentage", "Operational Availability" (OA), "Duty Cycle"
Availability (reliability engineering) https://en.wikipedia.org/wiki/Availability
Terms from other types of work: kilowatt-hours (kWh), Weight per rep, number of reps, Total Time Under Tension
Mass spectrometry method identifies pathogens within minutes instead of days
It would be interesting to know how much resolution they need for the diagnosis to be reliable.
Because high resolution mass spectrometers cost millions of dollars, and "minutes" for a diagnosis can mean that one spectrometer can only run 3 samples per hour - or 72 per day.
And while a research university can afford a million dollar spectrometer (and the grad students that run it), even a small hospital will create 72 bacterial swabs per hour - while absolutely not having the money to get 10 spectrometers with the corresponding technicians.
And the incumbent/competitor - standard bacterial cultures - is cheap!
MRI machines cost up to 0.5 million, and take more than minutes for a scan. So this is in the upper realm of reasonable. At this point it is an engineering problem to get costs down.
There are 0.05 Tesla MRI machines that almost work with a normal 15A 110V outlet now FWIU; https://news.ycombinator.com/item?id=40965068 :
> "Whole-body magnetic resonance imaging at 0.05 Tesla" [1800W] https://www.science.org/doi/10.1126/science.adm7168 .. https://news.ycombinator.com/item?id=40335170
Other emerging developments in __ spectroscopy:
/?hnlog Spectro:
NIRS;
> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?
NIRS would be low cost, but the wavelength is long compared to the sample size.
From https://news.ycombinator.com/item?id=38528844 :
> "Reversible optical data storage below the diffraction limit (2023)" [at cryogenic temperatures] https://news.ycombinator.com/item?id=38528844 :
> [...] have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
"Eye-safe laser technology to diagnose traumatic brain injury in minutes" https://news.ycombinator.com/item?id=38510092 :
> "Window into the mind: Advanced handheld spectroscopic eye-safe technology for point-of-care neurodiagnostic" (2023) https://www.science.org/doi/10.1126/sciadv.adg5431
> multiplex resonance Raman spectroscopy
Holotomographic imaging is yet another imaging method that could be less costly than MRI; https://news.ycombinator.com/item?id=40819864
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054 :
> "Terahertz spectroscopy of collective charge density wave dynamics at the atomic scale" (2024) https://www.nature.com/articles/s41567-024-02552-7
Microservices are a tax your startup probably can't afford
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains. Before that? You’re paying the price without getting the benefit: duplicated infra, fragile local setups, and slow iteration. For example, Segment eventually reversed their microservice split for this exact reason — too much cost, not enough value.
Basically this. Microservices are a design pattern for organisations as opposed to technology. That sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith; it might break out into frontend, backend, and a separate service for async background jobs, e.g. PDF creation is often a background task because of how long it takes to produce. After that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't survive because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.
Some resume driven developers will choose microservices for startups as a way to LARP a future megacorp job. Startup may fail, but they at least got some distributed system experience. It takes extremely savvy technical leadership to prevent this.
In my experience, it seems the majority of folks know the pitfalls of microservices, and have since like... 2016? Maybe I'm just blessed to have been at places with good engineering, technical leadership, and places that took my advice seriously, but I feel like the majority of folks I've interacted with all have experienced some horror story with microservices that they don't want to repeat.
Does [self-hosted, multi-tenant] serverless achieve similar separation of concerns in comparison to microservices?
Should the URLs contain a version; like /api/v1/ ?
FWIU OpenAPI schemas enable e.g. MCP service discovery, but not multi-API workflows or orchestrations.
(Edit: "The Arazzo Specification - A Tapestry for Deterministic API Workflows" by OpenAPI; src: https://github.com/OAI/Arazzo-Specification .. spec: https://spec.openapis.org/arazzo/latest.html (TIL by using this comment as a prompt))
People are losing loved ones to AI-fueled spiritual fantasies
Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?
An LLM trained on all other science before Copernicus or Galileo would be expected to explain as true that the world is the flat center of the universe.
The idea that people in medieval times believed in a flat Earth is a myth that was invented in the 1800s. See https://en.wikipedia.org/wiki/Myth_of_the_flat_Earth for more.
Galileo Galilei: https://en.wikipedia.org/wiki/Galileo_Galilei :
> Galileo's championing of Copernican heliocentrism was met with opposition
... By the most published majority, whose texts would've been used to train science LLMs at the time back then.
And that most published majority believed in the Ptolemaic model. Which as https://en.wikipedia.org/wiki/Geocentric_model#Ptolemaic_mod... says:
> Ptolemy argued that the Earth was a sphere in the center of the universe...
Note. Spherical Earth. Not flat.
But could the Greeks sail?
Did ancient (Eastern?) Jacob's Staff surveying and navigation methods account for the curvature of the earth? https://www.google.com/search?q=Did%20ancient%20(Eastern%3F)... :
- History of geodesy: https://en.wikipedia.org/wiki/History_of_geodesy
FWIU Egyptian sails are Phoenician in origin.
The criticism being directed at this comment because in fact most pre-Copernican scholarship believed the world spherical is missing the point.
Musk's xAI in Memphis: 35 gas turbines, no air pollution permits
>methane gas turbines
>nitrogen oxides
Can someone explain to me how those produce nitrogen oxides? High school chem taught me CH4 + 2 O2 -> CO2 + 2 H2O... where do the nitrogen oxides come from?
The turbine produces high temperatures which then causes reactions with the nitrogen that's present in the atmosphere. There's also nitrogen present in the fuel. It's not 100% pure methane.
Does AGR Acidic Gas Reduction work with methane turbines?
Shouldn't all methane-powered equipment have this AGR (or similar) new emission reduction technology?
From https://www.ornl.gov/news/add-device-makes-home-furnaces-cle... :
> ORNL’s award-winning ultraclean condensing high-efficiency natural gas furnace features an affordable add-on technology that can remove more than 99.9% of acidic gases and other emissions. The technology can also be added to other natural gas-driven equipment.
FWIU basically no generators have catalytic converters, because that requires computer controlled fuel ignition.
...
FWIU, in data centers, "100% Green" means "100% offset by PPAs" (power-purchase agreement); so "200% green" could mean "100% directly-sourced clean energy".
Should they pack up and pay out and start over elsewhere with enough clean energy, or should they be forced to make the methane generators comply?
It sounds like that's what they're adding for their permanent generators. Unfortunate that their portable "temporary" generators weren't equipped with this technology.
Though it's not a methane leak, in their case the problem is the poisonous byproducts of combustion FWIU;
Looks like methane.jpl.nasa.gov isn't up anymore? https://methane.jpl.nasa.gov/
We invested tax dollars in NASA space-based methane leak imaging, because methane is a potent greenhouse gas.
Is this another casualty of their distracting, wasteful, and saboteurial hit on NASA Earth Science funding?
I decided to pay off a school’s lunch debt
On the off chance you're interested in school lunches I highly recommend watching videos of Japanese school lunches on YouTube. There's a bunch out there now and if you were raised in the American system it will probably blow your mind. The idea that lunches can be freshly made, on site, out of healthy ingredients and children are active participants in serving and cleaning up is just crazy. When I encountered it for the first time I felt like a big part of my childhood had been sold to the lowest bidder.
> The idea that lunches can be freshly made, on site, out of healthy ingredients and children
Excellent garden path sentence.
A friend who's a pre-school teacher has this excellent t-shirt (I LMAO the first time I saw it):
let's eat, kids.
let's eat kids.
punctuation saves lives.
https://m.media-amazon.com/images/I/B1pppR4gVKL._CLa%7C2140%...
"Eats, Shoots & Leaves" https://en.wikipedia.org/wiki/Eats,_Shoots_%26_Leaves
A new hairlike electrode for long-term, high-quality EEG monitoring
This is actually an improvement on the lead electrode technology, making them smaller and improving the scalp adhesion for better fidelity. Ostensibly, you would still need an array of them for medical diagnoses, like isolating and/or monitoring a seizure to a particular portion of the brain.
Isn't that circular pad the electrode, and the "hair" just the lead which can be replaced by any copper wire?
Terrible headline. The single hair-like electrode outperforms the connection performance (longevity/signal to noise) of a single electrode from a 21-lead EEG.
It's not just the headline. "... a single electrode that looks just like a strand of hair and is more reliable than the standard, multi-electrode version." "The researchers tested the device’s long-term adhesion and electrical performance and compared it to the current, standard EEG using multiple electrodes."
I read the story three times and I'm still confused. But I'm sure you're right, and I think it's the author who's confused.
My second mental image was an alligator clip connected to a hair on a person’s head.
https://www.psu.edu/news/research/story/future-brain-activit...
Better link from Penn State. My reading of this seems to suggest that these electrodes are better than the standard one, NOT that one electrode is better than 24 leads.
OK, thanks, we’ve changed the URL to this from https://newatlas.com/medical-devices/3d-printed-hairlike-eeg....
[flagged]
That's not what an EEG is, and they have been around for decades.
Did they read and write brain signals with AI? What would the goal of that even be?
Presuming a higher bandwidth interface than what we currently have (voice/text chat).
Many Black Mirror episodes explore this.
The brain machine interface concept seems very useful. My question is the AI part. Aside from the machine learning likely needed to decode brain signals meaningfully at all, why would we want to hook up something with any resemblance to current AI to a brain directly?
The current AI stuff can easily be described as a breakthrough in natural language interfaces, a field engineers have been working on for many years. It's easy to imagine that the methods used to develop current AI could be used for other types of interfaces we've been stuck on.
No
Nitrogen runoff leads to preventable health outcomes, experts say (2021)
/? Nitrogen: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... :
- "Discovery of nitrogen-fixing corn variety could reduce need for added fertilizer" (2018) https://news.ycombinator.com/item?id=17721741
I should have loved biology too
I should write a blog post entitled "I should have loved computer science"
Do you do bioinformatics?
Bioinformatics: https://en.wikipedia.org/wiki/Bioinformatics
Health informatics: https://en.wikipedia.org/wiki/Health_informatics
Genetic algorithm: https://en.wikipedia.org/wiki/Genetic_algorithm :
> Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation.
AP®/College Biology: https://www.khanacademy.org/science/ap-biology
AP®/College Biology > Unit 6: Gene Expression and Regulation > Lesson 6: Mutations: https://www.khanacademy.org/science/ap-biology/gene-expressi...
AP®/College Biology > Unit 7: Natural selection: https://www.khanacademy.org/science/ap-biology/natural-selec...
Rosalind.info has free CS algorithms applied bioinformatics exercises in Python; in a tree or a list; including genetic combinatorics. https://rosalind.info/problems/list-view/
FWICS there is not a "GA with code exercise" in the AP Bio or Rosalind curricula.
YouTube has videos of simulated humanoids learning to walk with mujoco and genetic algorithms that demonstrate goal-based genetic programming with Cost / Error / Fitness / Survival functions.
Mutating source code AST is a bit different from mutating to optimize a defined optimization problem with specific parameters; though the task is basically the same: minimize error between input and output, and then XAI.
Trump says Harvard will lose tax exempt status
What prevents them from instead justifying tax-exempt status as a faith-based nonprofit for their FSM-related charitable work, say?
I think that assumes that there's some fair evaluating of tax status going on.
Harvard defied Trump and that's why they're making this push. I don't think any given argument to the feds will change that math.
If you work for the feds, I think you know that if you were to follow a process and find that Harvard should retain their tax exempt status then you won't have a job long.
Show HN: Frecenfile – Instantly rank Git files by edit activity
frecenfile is a tiny CLI tool written in Rust that analyzes your Git commit history in order to identify "hot" or "trending" files using a frecency score that incorporates both the frequency and recency of edits.
It is fast enough to return instantly in most repositories, and reasonably fast in practically any repository.
It can be useful for getting a list of "recent"/important files when Git history is the only "usage" history you have available.
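frecenfile's exact scoring formula isn't quoted here, but a common way to sketch frecency is to weight each edit by an exponential decay of its age (the 30-day half-life and the `frecency` helper below are assumptions for illustration, not frecenfile's implementation):

```python
from collections import defaultdict

def frecency(edits, half_life_days=30.0):
    """Rank files from (path, age_in_days) edit events: frequent AND recent wins.
    Each edit contributes 0.5 ** (age / half_life), so fresh edits count ~1.0
    and old edits decay toward 0."""
    scores = defaultdict(float)
    for path, age_days in edits:
        scores[path] += 0.5 ** (age_days / half_life_days)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

edits = [
    ("src/main.rs", 1), ("src/main.rs", 2), ("src/main.rs", 3),   # edited this week
    ("README.md", 300), ("README.md", 310), ("README.md", 320),   # edited long ago
]
ranking = frecency(edits)  # "hot" file first despite equal edit counts
```

The same event list could be produced from `git log --name-only` with commit dates.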
Felix86: Run x86-64 programs on RISC-V Linux
What remaining challenges are you most interested in solving for felix86—GPU driver support, full 32-bit compatibility, better Wine integration, or something else entirely?
The felix86 compatibility list also lists SuperTux and SuperTuxCart.
"lsteamclient: Add support for ARM64." https://github.com/ValveSoftware/Proton/commit/8ff40aad6ef00... .. https://news.ycombinator.com/item?id=43847860
/? box86: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
"New box86 v0.3.2 and Box64 v0.2.4 released – RISC-V and WoW64 support" (2023) https://news.ycombinator.com/item?id=37197074
/? box64 is:pr RISC-V is:closed: https://github.com/ptitSeb/box64/pulls?q=is%3Apr+risc-v+is%3...
Wikipedia says it will use AI, but not to replace human volunteers
>Scaling the onboarding of new Wikipedia volunteers
could be pretty helpful. I edit a bit and it's confusing in a number of ways, some I still haven't got the hang of. There's very little "thank you for your contribution but it needs a better source - why not this?" and usually just your work reverted without thanks or much explanation.
Could AI sift through removals and score them as biased or as vandalism?
And then what to do about "original research" that should've been moved to a different platform or better (also with community review) instead of being deleted?
Wikipedia:No_original_research#Using_sources: https://en.wikipedia.org/wiki/Wikipedia:No_original_research#Using_sources
I'm guessing it could advise about that even if it didn't make decisions.
Healthy soil is the hidden ingredient
I'm a gardening and landscaping enjoyer, but I am constantly confused about the borderline magical thinking surrounding dirt, among other aspects of growing things.
If you look at hydroponics/aeroponics, plants basically need water, light, and fertilizer (N (nitrogen), P (phosphorus), K (potassium), and a few trace minerals). It can be the most synthetic process you've ever seen, and the plants will grow amazingly well.
The other elements regarding soil health, etc, would be much better framed in another way, rather than as directly necessary for plant health. The benefits of maintaining a nice living soil is that it makes the environment self-sustaining. You could just dump synthetic fertilizer on the plant, with some soil additives to help retain the right amount of drainage/retention, and it would do completely fine. But without constant optimal inputs, the plants would die.
If you cultivate a nice living soil, such that the plants' own and surrounding detritus can be broken down effectively, such that the nutrients in those natural processes are made available to the plant, and such that otherwise nonoptimal soil texture is improved by the same processes, then you can theoretically arrive at a point that requires very few additional inputs.
sure, we can make them grow well in a lab. but a natural system is so much simpler and elegant
Plants absorb nitrogen and CO2 from the air and store it in their roots; plants fertilize soil.
If you only grow plants with externally-sourced nutrients, that is neither sustainable nor permaculture.
Though it may be more efficient to grow without soil; soil depletion isn't prevented by production processes that do not generate topsoil.
JADAM is a system developed by a chemical engineer given what is observed to work in JNF/KNF. https://news.ycombinator.com/item?id=38527264
Where do soil amendments come from, and what would deplete those stocks (with consideration for soil depletion)?
(Also, there are extremely efficient ammonia/nitrogen fertilizer generators, but there's still the algae-from-runoff problem. FWIU we should be asking farmers to please produce granulated fertilizer instead of liquid.)
The new biofuel subsidies require no-till farming practices; which other countries are further along at implementing them (in order to prevent or reverse soil depletion)?
Tilling turns topsoil to dirt due to loss of moisture, oxidation, and solar radiation.
The vast majority of plants do not absorb nitrogen from the air. Legumes are the well-known exception.
I think that's why it's good to rotate beans or plant clover cover crop.
Three Sisters: Corn, Beans, Squash: https://en.wikipedia.org/wiki/Three_Sisters_(agriculture)
Companion planting: https://en.wikipedia.org/wiki/Companion_planting
Nitrogen fixation: https://en.wikipedia.org/wiki/Nitrogen_fixation
Most plants do not absorb atmospheric nitrogen, but need external nitrogen fertilizer to grow! That causes serious ground water pollution!
> The new biofuel subsidies require no-till farming practices
This actually depletes soil of nitrogen!
Why do you believe that no-till farming practices deplete soil of nitrogen more than tilling?
A plausible hypothesis: tilling destroys the bacteria that get nitrogen to plant roots.
Isn't runoff erosion the primary preventable source of nitrogen depletion?
FWIU residue mulch initially absorbs atmospheric nitrogen instead of the soil absorbing it, but that residue and its additional nitrogen eventually decays into the soil.
I have heard that it takes something like five years to completely transform acreage with no-till; after that it's relatively soft and easily plantable and not compacted, so it absorbs and holds water.
No-till farmers are not lacking soil samples.
What would be a good test of total change in soil nitrogen content (and runoff) given no-till and legacy farming practices?
With pressure-injection seeders and laser weeders, how many fewer chemicals are necessary for pro farming?
String Types Considered Harmful
But a lack of string types (or tagged strings) results in injection vulnerabilities: OS, SQL, XSS (JS, CSS, HTML), XML, URI, query string, etc.
How should template autoescaping be implemented [in Zig without string types or type-tagged strings]?
E.g. Jinja2 implements autoescaping with MarkupSafe; strings wrapped in a Markup() type will not be autoescaped because they already have an .__html__() method.
MarkupSafe: https://pypi.org/project/MarkupSafe/
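The MarkupSafe mechanism can be sketched with the stdlib alone (a minimal, hypothetical `Markup`/`escape` pair; the real library also escapes during string operations like concatenation and formatting):

```python
import html

class Markup(str):
    """A str subclass declaring 'already safe HTML' via the __html__ protocol."""
    def __html__(self):
        return self

def escape(value):
    """Autoescape unless the value opts out by providing an .__html__() method."""
    if hasattr(value, "__html__"):
        return Markup(value.__html__())   # trusted: pass through unchanged
    return Markup(html.escape(str(value), quote=True))  # untrusted: escape

user_input = '<script>alert("xss")</script>'   # untrusted, gets escaped
trusted = Markup("<b>bold</b>")                # trusted, passes through
```

A template engine then calls `escape()` on every interpolated value, so injection requires explicitly wrapping attacker-controlled data in `Markup`.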
Some time ago, I started to create a project called "strypes" to teach or handle typed strings and escaping correctly.
"Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')" https://cwe.mitre.org/data/definitions/74.html
How to Write a Fast Matrix Multiplication from Scratch with Tensor Cores (2024)
Multiplication algorithm: https://en.wikipedia.org/wiki/Multiplication_algorithm
From https://news.ycombinator.com/item?id=40519828 re: LLMs and matrix multiplication with tensors:
> "You Need to Pay Better Attention" (2024) https://arxiv.org/abs/2403.01643 :
>> Our first contribution is Optimised Attention, which performs similarly to standard attention, but has 3/4 as many parameters and one matrix multiplication fewer per head. Next, we introduce Efficient Attention, which performs on par with standard attention with only 1/2 as many parameters and two matrix multiplications fewer per head and is up to twice as fast as standard attention. Lastly, we introduce Super Attention, which surpasses standard attention by a significant margin in both vision and natural language processing tasks while having fewer parameters and matrix multiplications.
From "Transformer is a holographic associative memory" (2025) https://news.ycombinator.com/item?id=43029899 .. https://westurner.github.io/hnlog/#story-43028710 :
>>> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]) which says that Fourier transforms convert convolutions to products.
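The convolution theorem can be checked numerically with a small stdlib-only DFT (an illustrative sketch; `numpy.fft` would be the practical choice): circular convolution in the time domain equals elementwise multiplication in the frequency domain.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT, returning real parts (inputs here are real signals)."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def circular_convolve(a, b):
    """Direct circular convolution: c[k] = sum_j a[j] * b[(k - j) mod n]."""
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

a = [1, 2, 3, 0]
b = [4, 0, 1, 0]
direct = circular_convolve(a, b)
via_fft = idft([x * y for x, y in zip(dft(a), dft(b))])  # matches `direct`
```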
From https://news.ycombinator.com/item?id=41322088 :
> "A carbon-nanotube-based tensor processing unit" (2024)
"Karatsuba Matrix Multiplication and Its Efficient Hardware Implementations" (2025) https://arxiv.org/abs/2501.08889 .. https://news.ycombinator.com/item?id=43372227
Google Search to redirect its country level TLDs to Google.com
I wonder if this is related to the first party cookie security model. That is supposedly why Google switched maps from maps.google.com to www.google.com/maps. Running everything off a single subdomain of a single root domain should allow better pooling of data.
Subdomains were chosen historically because it was the sane way to run different infrastructure for each service. Nowadays, with the globally distributed frontends that Google's Cloud offers, path routing and subdomain routing are mostly equivalent. Subdomains are archaic, they are exposing the architecture (separate services) to users out of necessity. I don't think cookies were the motivation but it's probably a nice benefit.
https://cloud.google.com/load-balancing/docs/url-map-concept...
Has anything changed about the risks of running everything with the same key, on the apex domain?
Why doesn't Google have DNSSEC?
To a first approximation, nobody has DNSSEC. It's not very good.
DNSSEC is necessary like GPG signatures are necessary; though also there are DoH/DoT/DoQ and HTTPS.
Google doesn't have DNSSEC because they've chosen not to implement it, FWIU.
/? DNSSEC deployment statistics: https://www.google.com/search?q=dnssec+deployment+statistics...
If not DNSSEC, then they should push another standard for signing DNS records (so that they are signed at rest (and encrypted in motion)).
Do DS records or multiple TLDs and x.509 certs prevent load balancing?
Were there multiple keys for a reason?
So, not remotely necessary at all? Neither DNSSEC nor GPG have any meaningful penetration in any problem domain. GPG is used for package signing in some language ecosystems, and, notoriously, those signatures are all busted (Python is the best example). They're both examples of failed 1990s cryptosystems.
Do you think the way Debian uses gpg signatures for package verification is also broken?
Red Hat too.
Containers, pip, and conda packages have TUF and now there's sigstore.dev and SLSA.dev. W3C Verifiable Credentials is the open web standard JSONLD RDF spec for signatures/attestations.
IDK how many reinventions of GPG there are.
Do all of these systems differ only in key distribution and key authorization, ceteris paribus?
The Chemistry Trick Poised to Slash Steel's Carbon Footprint
> Their process, which uses saltwater and iron oxide instead of carbon-heavy blast furnaces, has been optimized to work with naturally sourced materials. By identifying low-cost, porous iron oxides that dramatically boost efficiency, the team is laying the groundwork for large-scale, eco-friendly steel production. And with help from engineers and manufacturers, they’re pushing this green tech closer to the real world.
ScholarlyArticle: "Pathways to Electrochemical Ironmaking at Scale Via the Direct Reduction of Fe2O3" (2025) https://pubs.acs.org/doi/10.1021/acsenergylett.5c00166 https://doi.org/10.1021/acsenergylett.5c00166
Hypertext TV
thought this was going to be about things like https://en.wikipedia.org/wiki/Hybrid_Broadcast_Broadband_TV (hypertext on tv)
Same. Thanks; TIL about HbbTV: Hybrid Broadcast Broadband TV: https://en.wikipedia.org/wiki/Hybrid_Broadcast_Broadband_TV
Had been wondering how to add a game clock below the TV that syncs acceptably; a third screen: https://news.ycombinator.com/item?id=30890265
Show HN: A VS Code extension to visualise Rust logs in the context of your code
We made a VS Code extension [1] that lets you visualise logs and traces in the context of your code. It basically lets you recreate a debugger-like experience (with a call stack) from logs alone.
This saves you from browsing logs and trying to make sense of them outside the context of your code base.
We got this idea from endlessly browsing traces emitted by the tracing crate [3] in the Google Cloud Logging UI. We really wanted to see the logs in the context of the code that emitted them, rather than switching back-and-forth between logs and source code to make sense of what happened.
It's a prototype [2], but if you're interested, we’d love some feedback.
---
References:
[1]: VS Code: marketplace.visualstudio.com/items?itemName=hyperdrive-eng.traceback
[2]: Github: github.com/hyperdrive-eng/traceback
[3]: Crate: docs.rs/tracing/latest/tracing
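The core trick here, recreating a call stack from span-structured logs, can be sketched in a few lines. This is a stdlib-only illustration; the `enter`/`exit`/`event` line format below is invented for the example, and the tracing crate's real output differs:

```python
def call_stack_at(lines, target):
    """Rebuild the span stack that was active when `target` was logged.

    Assumes well-nested enter/exit pairs, as span-based tracing emits.
    """
    stack = []
    for line in lines:
        kind, _, name = line.partition(" ")
        if kind == "enter":
            stack.append(name)
        elif kind == "exit" and stack and stack[-1] == name:
            stack.pop()
        elif kind == "event" and name == target:
            return list(stack)  # snapshot of the stack at the event
    return []

logs = ["enter main", "enter handle_request", "enter parse",
        "event bad_header", "exit parse", "exit handle_request"]
# The event happened three spans deep:
stack = call_stack_at(logs, "bad_header")
```

With well-nested spans this recovers the same "debugger-like" view the extension renders, which is why logs alone can be enough.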
Good idea!
This probably saves resources by eliminating need to re-run code to walk through error messages again.
Integration with time-travel debugging would be even more useful: https://news.ycombinator.com/item?id=30779019
From https://news.ycombinator.com/item?id=31688180 :
> [ eBPF; Pixie, Sysdig, Falco, kubectl-capture, stratoshark ]
> Jaeger (Uber contributed to CNCF) supports OpenTracing, OpenTelemetry, and exporting stats for Prometheus.
From https://news.ycombinator.com/item?id=39421710 re: distributed tracing:
> W3C Trace Context v1: https://www.w3.org/TR/trace-context-1/#overview
Thanks for sharing all these links, super handy! I really appreciate it.
NP; improving QA feedback loops with IDE support is probably as useful as test coverage and test result metrics
/? vscode distributed tracing: https://www.google.com/search?q=vscode+distributed+tracing :
- jaegertracing/jaeger-vscode: https://github.com/jaegertracing/jaeger-vscode
/? line-based display of distributed tracing information in vs code: https://www.google.com/search?q=line-based%20display%20of%20... :
- sprkl personal observability platform: https://github.com/sprkl-dev/use-sprkl
Theoretically it should be possible to correlate deployed code changes with the logs and traces preceding 500 errors; and then recreate the failure condition given a sufficient clone of production (in CI) to isolate and verify the fix before deploying new code.
Practically then, each PR generates logs, traces, and metrics when tested in a test deployment and then in production. FWIU that's the "personal" part of sprkl.
Thanks for sharing, first time I hear about sprkl.dev
'Cosmic radio' detector could discover dark matter within 15 years
From https://news.ycombinator.com/item?id=42376759 :
> FWIU this Superfluid Quantum Gravity rejects dark matter and/or negative mass in favor of supervaucuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From https://news.ycombinator.com/item?id=42371946 :
> Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
>> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
High-voltage hydrogel electrolytes enable safe stretchable Li-ion batteries
Isolated Execution Environment for eBPF
From https://news.ycombinator.com/item?id=43553198 .. https://news.ycombinator.com/item?id=43564972 :
> Can [or should] a microkernel run eBPF? [or WASM?]
The performance benefits of running eBPF in the kernel are substantial and arguably justify it, but how much should a kernel or a microkernel do?
Ask HN: Why is there no better protocol support for WiFi captive portals?
I'm curious why we still rely on hacky techniques like requesting captive.apple.com and waiting for interception, rather than having proper protocol-level support built into WPA. Why can't the WPA protocol simply announce that authentication requires a captive portal?
It seems like every public hotspot I connect to is flaky and will sometimes report it's connected when it still requires captive portal auth. Or even when it does work, there's a 15-second delay before the captive screen pops up. Shouldn't this have been solved properly by now?
Does anyone have insight into the technical or historical reasons this remains so messy? If the wireless protocol could announce to the client, through some standard, that they have to complete auth via HTTP, I feel clients could implement a much better experience.
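For reference, the "hacky technique" the question alludes to (requesting a known URL such as captive.apple.com and checking for interception) boils down to a small classifier. A stdlib-only sketch of that heuristic, with the probe response passed in rather than fetched over the network:

```python
def looks_captive(status, body, expected_body="Success"):
    """Classify a captive-portal probe response.

    A transparent network returns the expected body verbatim; a captive
    portal intercepts the probe and answers with a redirect to, or the
    HTML of, its login page. This mirrors the probe-URL heuristic that
    OS vendors use in the absence of protocol-level signaling.
    """
    if status in (301, 302, 303, 307, 308):
        return True  # redirected, presumably to a portal login page
    return body.strip() != expected_body

assert looks_captive(302, "")                    # redirect -> captive
assert not looks_captive(200, "Success")         # clean network
assert looks_captive(200, "<html>Login</html>")  # spoofed 200 page
```

The fragility the question describes falls out of this design: the client can only infer interception after the fact, and middleboxes that spoof a 200 with the expected body defeat it entirely.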
Related issue: secured DNS must downgrade/fallback to unsecured DNS because of captive portal DNS redirection (because captive portals block access to DNS until the user logs in, and the user can't log into the captive portal without DNS redirection that is prevented by DoH, DoT, and DoQ).
Impact: if you set up someone's computer to use secured DNS only, and their device doesn't have per-SSID connection profiles, then they can't use captive portal hotspot Wi-Fi without knowing how to disable secured DNS.
"Do not downgrade to unsecured DNS unless it's an explicitly authorized captive portal"
IIRC there's a new-ish way to advertise DNS-over-HTTPS resolvers via DHCP like there is for normal DNS (DNR: Discovery of Network-designated Resolvers).
Bilinear interpolation on a quadrilateral using Barycentric coordinates
/? Barycentric
From "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
Phase from second order amplitude FWIU
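On the topic itself: on the unit square, the four bilinear weights behave like generalized barycentric coordinates (non-negative and summing to 1). A minimal sketch, assuming corners are given counter-clockwise and parameters (u, v) lie in [0, 1]^2:

```python
def bilerp(p00, p10, p11, p01, u, v):
    """Bilinear interpolation on a quadrilateral.

    The weights are products of the 1D barycentric pairs (1-u, u) and
    (1-v, v); like triangle barycentric coordinates, they are
    non-negative and sum to 1, so the result stays inside the quad.
    """
    w = ((1 - u) * (1 - v), u * (1 - v), u * v, (1 - u) * v)
    return tuple(
        w[0] * a + w[1] * b + w[2] * c + w[3] * d
        for a, b, c, d in zip(p00, p10, p11, p01)
    )

# The center of the unit square maps to (0.5, 0.5):
center = bilerp((0, 0), (1, 0), (1, 1), (0, 1), 0.5, 0.5)
```

For non-rectangular quads the same weights apply to arbitrary corner positions; inverting the map (point to (u, v)) requires solving a small quadratic, which is the harder half of the technique.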
Universal photonic artificial intelligence acceleration
"Universal photonic artificial intelligence acceleration" (2025) https://www.nature.com/articles/s41586-025-08854-x :
> Abstract: [...] Here we introduce a photonic AI processor that executes advanced AI models, including ResNet [3] and BERT [20,21], along with the Atari deep reinforcement learning algorithm originally demonstrated by DeepMind [22]. This processor achieves near-electronic precision for many workloads, marking a notable entry for photonic computing into competition with established electronic AI accelerators [23] and an essential step towards developing post-transistor computing technologies.
Photon-based chips vs electron-based chips? It's interesting that photons and electrons are closely related: two neutral photons can become an electron-positron pair and vice versa.
The paper mentions that their photonic chip is less precise than an electronic one, but this looks like an advantage for AI. In fact, the stubborn precision of electron-based processors that erase the quantum nature of electrons is what I think is preventing the creation of real AI. In other words, if a microprocessor is effectively a deterministic complex mechanism, it won't become AI no matter what, but if the quantum nature is let loose, at least slightly, interesting things will start to happen.
There are certainly infinitely non-halting automata on deterministic electronic processors.
Abstractly, in terms of constructor theory, the non-halting task is independent from a constructor, which is implemented with a computation medium.
FWIU from reading a table on wikipedia about physical platforms for QC it would be possible to do quantum computing with just electron voltage but typical component quality.
So phase; certain phases.
And now parametric single photon emission and detection.
"Low-noise balanced homodyne detection with superconducting nanowire single-photon detectors" (2024) https://news.ycombinator.com/item?id=39537236
"A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
"Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 .. https://arxiv.org/abs/2302.05967 .. "Study of photons in QC reveals photon collisions in matter create vortices" https://news.ycombinator.com/item?id=40600736
"How do photons mediate both attraction and repulsion?" (2025) [as phonons in matter] https://news.ycombinator.com/item?id=42661511 notes re: recent findings with photons ("quanta")
Fedora change aims for 99% package reproducibility
This goal feels like a marketing OKR to me. A proper technical goal would be "all packages, except the ones that have a valid reason, such as signatures, not to be reproducible".
As someone who dabbles a bit in the RHEL world, IIRC all packages in Fedora are signed. In addition, the DNF/Yum metadata is also signed.
IIRC Debian packages themselves are not signed, but the apt metadata is signed.
I learned this from an ansible molecule test env setup script for use in containers and VMs years ago; it uses `type -p` because `which` isn't necessarily installed in containers, for example:
type -p apt && (set -x; apt install -y debsums; debsums | grep -v 'OK$') || \
type -p rpm && rpm -Va # --verify --all
dnf reads .repo files from /etc/yum.repos.d/ [1] which have various gpg options; here's an /etc/yum.repos.d/fedora-updates.repo:
[updates]
name=Fedora $releasever - $basearch - Updates
#baseurl=http://download.example/pub/fedora/linux/updates/$releasever/Everything/$basearch/
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
enabled=1
countme=1
repo_gpgcheck=0
type=rpm
gpgcheck=1
metadata_expire=6h
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch
skip_if_unavailable=False
From the dnf conf docs [1], there are actually even more per-repo gpg options:
gpgkey
gpgkey_dns_verification
repo_gpgcheck
localpkg_gpgcheck
gpgcheck
1. https://dnf.readthedocs.io/en/latest/conf_ref.html#repo-opti...
2. https://docs.ansible.com/ansible/latest/collections/ansible/... lists a gpgcakey parameter for the ansible.builtin.yum_repository module
For Debian, Ubuntu, Raspberry Pi OS and other dpkg .deb and apt distros:
man sources.list
man sources.list | grep -i keyring -C 10
# trusted:
# signed-by:
# /etc/apt/ trusted.gpg.d/
man apt-secure
man apt-key
apt-key help
less "$(type -p apt-key)"
signing-apt-repo-faq: https://github.com/crystall1nedev/signing-apt-repo-faq
From "New requirements for APT repository signing in 24.04" (2024) https://discourse.ubuntu.com/t/new-requirements-for-apt-repo... :
> In Ubuntu 24.04, APT will require repositories to be signed using one of the following public key algorithms: [ RSA with at least 2048-bit keys, Ed25519, Ed448 ]
> This has been made possible thanks to recent work in GnuPG 2.4 82 by Werner Koch to allow us to specify a “public key algorithm assertion” in APT when calling the gpgv tool for verifying repositories.
The Mutable OS: Why Isn't Windows Immutable in 2025?
Hey all—this is something I’ve been thinking about for a while in my day-to-day as a desktop support tech. We’ve made huge strides in OS security, but immutability is still seen as exotic, and I don’t think it should be. Curious to hear thoughts or counterpoints from folks who’ve wrestled with these same issues.
I'm working with rpm-ostree distros on workstations. The Universal Blue (Fedora Atomic (CoreOS)) project has OCI images that install as immutable host images.
We were able to install programs as admin on Windows in our university computer lab because of DeepFreeze, almost 20 years ago
"Is DeepFreeze worth it?" https://www.reddit.com/r/sysadmin/comments/18zn3jn/is_deepfr...
TIL Windows has UWF built-in:
"Unified Write Filter (UWF) feature" https://learn.microsoft.com/en-us/windows/configuration/unif...
Re: ~immutable NixOS and SELinux and Flatpaks' chroot filesystems not having SELinux labels like WSL2 either: https://news.ycombinator.com/item?id=43617363
Huh, I had no idea that UWF was a feature of Windows, and I'm kind of surprised not to see more widespread adoption for workstation rollouts. DeepFreeze was great (excepting updates and other minor issues) and actively reduced a lot of nuisance issues that we might otherwise have had to deal with when I worked for a school.
> On September 20, 2024, Microsoft announced that Windows Server Update Service would no longer be developed starting with Windows Server 2025.[4] Microsoft encourages business to adopt cloud-based solution for client and server updates, such as Windows Autopatch, Microsoft Intune, and Azure Update Manager. [5]
WSUS Offline installer is also deprecated now.
And then to keep userspace updated too, a package manager like Chocolatey NuGet and this PowerShell script: https://github.com/westurner/dotfiles/blob/develop/scripts/s...
Universal Blue immutable OCI images;
ublue-os/main: https://github.com/ublue-os/main :
> OCI base images of Fedora with batteries included
ublue-os/image-template: https://github.com/ublue-os/image-template :
> Build your own custom Universal Blue Image!
Microsoft hired Poettering (systemd), who also devs on Fedora FWIU.
systemd/particleos is an immutable Linux distribution built with mkosi:
"systemd ParticleOS" (2025) https://news.ycombinator.com/item?id=43649088
Immutable: cannot be changed once deployed.
Idempotent: Ansible is designed for idempotent tasks, which do not further change state if re-run.
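The idempotency contract can be sketched as a pure function that reports whether anything changed; this is the ensure-present pattern Ansible modules follow (the function and names here are illustrative, not Ansible APIs):

```python
def ensure_line(lines, line):
    """Idempotent 'ensure this line is present' operation.

    Returns (new_lines, changed). Applying it a second time with the
    same arguments reports changed=False and leaves the state alone,
    which is exactly the property Ansible's 'changed' status encodes.
    """
    if line in lines:
        return lines, False
    return lines + [line], True

state, changed1 = ensure_line(["a"], "b")   # first run: changes state
state, changed2 = ensure_line(state, "b")   # second run: no-op
```

Declaring desired state ("line X is present") rather than actions ("append line X") is what makes re-running a whole playbook safe.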
Windows Containers are relatively immutable. Docker Desktop and Podman Desktop include a copy of Kubernetes (k8s) and also kubectl IIRC
Do GUI apps run in Windows containers?
The MSIX apps can opt to run in a sandbox. It's not perfect, but it's _something_. Plus MSIX helps ensure clean install/uninstall as well as delta updates.
Again, not perfect, but serviceable.
fedora/toolbox and distrobox create containers that can access the X socket and /dev/dri/ to run GUI apps from containers.
Flatpaks share (GNOME, KDE, NVIDIA, podman) runtimes, by comparison.
Re: MSIX https://news.ycombinator.com/item?id=23394302 :
> MSIX only enforces a sandbox if an application doesn’t elect to use the restricted capabilities that allow it to run without. File system and registry virtualization can be disabled quite easily with a few lines in the package manifest, as well as a host of other isolation features.
Flatseal and KDE and Gnome can modify per-flatpak permissions. IDK if there's a way to do per-flatpak-instance permissions, like containers.
RUN --mount=type=cache (BuildKit cache mounts)
Man pages are great, man readers are the problem
I disagree. I have been writing man pages for a while, and mastering the language is hard. The documentation for both the mdoc and mandb formats does not cover the whole language, and the only remaining reference for roff itself seems to be the book by Brian Kernighan. mdoc and mandb are like macro sets on top of roff.
Just this week I considered proposing to $distro to convert all manpages to markdown as part of the build process, and then use a markdown renderer on the shipped system to display them. This would allow the distro to stop shipping *roff per default.
Markdown profits from the much larger amount of tooling. There are a ton of WYSIWYG editors that would allow non-technical users to write such documentation. I imagine we would all profit if creating manual pages was that easy.
On the other hand, Markdown is even less formalized. It's like 15 different dialects from different programs that differ in their feature sets and parsing rules. I do not believe things like "How do I quote an asterisk or underscore so it is rendered verbatim" can be portably achieved in Markdown.
RST/Sphinx solves that problem in that there is a single canonical dialect, and it's already effectively used in many very large projects, including the Linux kernel and (obviously) the Python programming language.
Another big plus in my book is that rST has a much more well defined model for extensions than Markdown.
Running CERN httpd 3.0A from 1996 (2022)
A CVE from the year 2000: https://www.cve.org/CVERecord?id=CVE-2000-0079
Do SBOM tools identify web servers that old?
EngFlow Makes C++ Builds 21x Faster and Software a Lot Safer
Fixing the Introductory Statistics Curriculum
I took AP Stat in HS and then College Stats in college unnecessarily. In HS it was TI-83 calculators, and in college it was Excel with optional Minitab. R was new then.
I remember ANOVA being the pinnacle of AP Stat. There are newer ANOVA-like statistical procedures like CANOVA;
"Efficient test for nonlinear dependence of two continuous variables" (2015) https://pmc.ncbi.nlm.nih.gov/articles/PMC4539721/
Estimators:
Yellowbrick implements the scikit-learn Estimator API: https://www.scikit-yb.org/en/latest/
"Developing scikit-learn estimators" https://scikit-learn.org/stable/developers/develop.html#esti... :
> All estimators implement the fit method:
estimator.fit(X, y)
> Out of all the methods that an estimator implements, fit is usually the one you want to implement yourself. Other methods such as set_params, get_params, etc. are implemented in BaseEstimator, which you should inherit from. You might need to inherit from more mixins, which we will explain later.
https://scikit-learn.org/stable/modules/generated/sklearn.pi...
Sklearn Glossary > estimator: https://scikit-learn.org/stable/glossary.html#term-estimator
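The estimator contract quoted above fits in a few lines. A duck-typed sketch of the fit/predict/get_params conventions, stdlib only (real estimators should inherit BaseEstimator; MeanRegressor here is an invented example):

```python
class MeanRegressor:
    """Toy estimator: predicts the mean of y seen during fit()."""

    def __init__(self, offset=0.0):
        self.offset = offset          # hyperparameters set in __init__

    def get_params(self, deep=True):  # required by the estimator API
        return {"offset": self.offset}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y):
        # Learned attributes end in a trailing underscore by convention
        self.mean_ = sum(y) / len(y)
        return self                   # fit returns self for chaining

    def predict(self, X):
        return [self.mean_ + self.offset for _ in X]
```

Because get_params/set_params expose the hyperparameters, anything following this protocol plugs into cross-validation and grid search machinery.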
https://news.ycombinator.com/item?id=41311052
https://news.ycombinator.com/item?id=28523442
What is GridSearchCV and what are ways to find optimal faster than brute force grid search?
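GridSearchCV exhaustively cross-validates every combination in a parameter grid; common faster alternatives are RandomizedSearchCV (a fixed sampling budget), successive halving (HalvingGridSearchCV), and Bayesian optimization (e.g. Optuna). A stdlib-only sketch of the budget difference, with a made-up stand-in for the cross-validated model score:

```python
import itertools
import random

def toy_score(lr, depth):
    # Stand-in for a cross-validated model score (invented for
    # illustration); peaks at lr=0.1, depth=5.
    return -(lr - 0.1) ** 2 - (depth - 5) ** 2 / 100

grid = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [2, 5, 8, 11]}

# Brute-force grid search: 4 * 4 = 16 score evaluations
best_grid = max(itertools.product(grid["lr"], grid["depth"]),
                key=lambda p: toy_score(*p))

# Random search: fixed budget of 6 evaluations, often near-optimal
random.seed(0)
samples = [(random.choice(grid["lr"]), random.choice(grid["depth"]))
           for _ in range(6)]
best_rand = max(samples, key=lambda p: toy_score(*p))
```

The budget gap widens combinatorially with each added hyperparameter, which is why random and adaptive searches scale where exhaustive grids do not.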
IRL case studies with applied stats and ML and AI:
- "ML and LLM system design: 500 case studies to learn from" https://news.ycombinator.com/item?id=43629360
Additional stats resources:
- "Seeing Theory" Brown https://seeing-theory.brown.edu/
- "AP®/College Statistics" Khan Academy https://www.khanacademy.org/math/ap-statistics
- "Think Stats: 3rd Edition" notebooks https://github.com/AllenDowney/ThinkStats/tree/v3
And physics and information theory in relation to stats as a field with many applications:
- "Information Theory: A Tutorial Introduction" (2019) https://arxiv.org/abs/1802.05968 .. https://g.co/kgs/sPha7qR
- Entropy > Statistical mechanics: https://en.wikipedia.org/wiki/Entropy#Statistical_mechanics
- Statistical mechanics: https://en.wikipedia.org/wiki/Statistical_mechanics
- Quantum Statistical mechanics: https://en.wikipedia.org/wiki/Quantum_statistical_mechanics
Statistical procedures are inferential procedures.
There are inductive, deductive, and abductive methods of inference.
Statistical inference: https://en.wikipedia.org/wiki/Statistical_inference
Statistical literacy: https://en.wikipedia.org/wiki/Statistical_literacy
Also, the ROC curve Wikipedia sidebar: https://en.wikipedia.org/wiki/Receiver_operating_characteris...
What is the difference between accuracy, precision, specificity, and sensitivity?
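All four terms come from the confusion matrix; their standard definitions in code (the counts below are made up for the example):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics.

    accuracy    = correct predictions / all predictions
    precision   = TP / predicted positives (a.k.a. PPV)
    sensitivity = TP / actual positives (a.k.a. recall, TPR)
    specificity = TN / actual negatives (TNR)
    """
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

m = classification_metrics(tp=8, fp=2, tn=85, fn=5)
# e.g. precision = 8/10 = 0.8, sensitivity = 8/13
```

Note how accuracy can look good on imbalanced data while sensitivity is poor, which is exactly the trade-off the ROC curve plots (TPR vs. 1 - specificity).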
Khan Academy has curriculum alignment codes on their OER content, but not yet Schema.org/about and/or :educationalAlignment for their https://schema.org/LearningResource (s)
A step towards life on Mars? Lichens survive Martian simulation in new study
"Ionizing radiation resilience: how metabolically active lichens endure exposure to the simulated Mars atmosphere" (2025) https://imafungus.pensoft.net/article/145477/
We've outsourced our confirmation biases to search engines
To write an unbiased systematic review, there are procedures.
In context to the scientific method, isn't RAG also wrong?
Generate a bad argument and then source support for it with RAG or by only searching for confirmation biased support.
I suppose I make this mistake too; I don't prepare systematic reviews, so my research meta-procedure has always been inadequate.
Usually I just [...] but a more scientific procedure would be [...].
Baby Steps into Genetic Programming
Genetic programming: https://en.wikipedia.org/wiki/Genetic_programming
Evolutionary computation > History: https://en.wikipedia.org/wiki/Evolutionary_computation#History
- "The sad state of property-based testing libraries" re coverage-guided fuzzing and other Hilbert spaces: https://news.ycombinator.com/item?id=40884466
- re: MOSES and Combo (a Lisp) and now Python too in re: "Show HN: Codemodder – A new codemod library for Java and Python" and libCST and AST and FST; https://news.ycombinator.com/item?id=39139198
- opencog/asmoses implements MOSES on AtomSpace, a hypergraph for algorithmic graph rewriting: https://github.com/opencog/asmoses
SELinux on NixOS
There's also the Fedora SELinux policy and the container-selinux policy set.
Rootless containers with podman lack labels like virtualenvs and NixOS.
Distrobox and toolbox set a number of options for rootless and regular containers;
--userns=keepid
--security-opt=label=disable
-v /tmp/path:/tmp/path:Z
-v /tmp/path:/tmp/path:z
--gpus all
"How Nix Works"
https://nixos.org/guides/how-nix-works/ :
How Nix works: builds have a cache key that's a hash of all build parameters - /nix/store/<cachekey> - so that atomic upgrades and rollback work.
How NixOS works: config files are also snapshotted at a cache key for rollback and atomic upgrades that don't fail if interrupted mid-package-install.
> A big implication of the way that Nix/NixOS stores packages is that there is no /bin, /sbin, /lib, /usr, and so on. Instead all packages are kept in /nix/store. (The only exception is a symlink /bin/sh to Bash in the Nix store.) Not using ‘global’ directories such as /bin is what allows multiple versions of a package to coexist. Nix does have a /etc to keep system-wide configuration files, but most files in that directory are symlinks to generated files in /nix/store.
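The coexistence of multiple package versions follows from that store path scheme. A loose illustration in code (Nix actually hashes serialized .drv derivations and uses a truncated base-32 digest, so this sketch only shows the shape of the idea):

```python
import hashlib
import json

def store_path(name, build_inputs):
    """Toy content-addressed store path: any change to any build input
    changes the hash, so two builds of the same package land in
    different directories and can coexist side by side."""
    digest = hashlib.sha256(
        json.dumps(build_inputs, sort_keys=True).encode()
    ).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}"

# Same source, different compiler -> different store path:
p1 = store_path("hello-2.12", {"src": "sha256:abc", "cc": "gcc-13"})
p2 = store_path("hello-2.12", {"src": "sha256:abc", "cc": "gcc-14"})
```

Rollback is then just repointing symlinks at an older store path; nothing is overwritten in place, so an interrupted upgrade can't corrupt the previous generation.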
But which files should be assigned which extended filesystem attribute labels; signed packages only, local builds of [GPU drivers, out-of-tree modules,], not-yet-packaged things like ProtonGE;
I remembered hearing that Bazzite ships with Nix.
This [1] describes installing with the nix-determinate-installer [2] and then [3] installing home-manager :
nix run nixpkgs#home-manager -- switch --flake nix/#$USER
[1] https://universal-blue.discourse.group/t/issue-installing-ce...
[2] https://github.com/DeterminateSystems/nix-installer
[3] https://nixos.wiki/wiki/Home_Manager
Working with rpm-ostree; upgrading the layered firefox RPM without a reboot requires -A/--apply-live (which runs twice) and upgrading the firefox flatpak doesn't require a reboot, but SELinux policies don't apply to flatpaks which run unconfined FWIU.
"SEC: Flatpak Chrome, Chromium, and Firefox run without SELinux confinement?" https://discussion.fedoraproject.org/t/sec-flatpak-chrome-ch...
From https://news.ycombinator.com/item?id=43564972 :
> Flatpaks bypass selinux and apparmor policies and run unconfined (on DAC but not MAC systems) because the path to the executable in the flatpaks differs from the system policy for /s?bin/* and so wouldn't be relabeled with the necessary extended filesystem attributes even on `restorecon /` (which runs on reboot if /.autorelabel exists).*
Having written a venv_relabel.sh script to copy selinux labels from /etc onto $VIRTUAL_ENV/etc , IDK; consistently-built packages are signed and maintained. The relevant commands from that script: https://github.com/westurner/dotfiles/blob/develop/scripts/v... :
sudo semanage fcontext --modify -e "${_equalpth}" "${_path}";
sudo restorecon -Rv "${_path}";
Is something like this necessary for every /nix/store/<key> directory?
sudo semanage fcontext --modify -e "/etc" "${VIRTUAL_ENV}/etc";
sudo restorecon -Rv "${VIRTUAL_ENV}/etc";
Or is there a better way to support chroots with selinux, with or without extending the existing SELinux functionality?
Max severity RCE flaw discovered in widely used Apache Parquet
Does anyone know if pandas is affected? I serialize/deserialize dataframes, and pandas uses parquet under the hood.
Pandas doesn't use the parquet python package under the hood: https://pandas.pydata.org/docs/reference/api/pandas.read_par...
> Parquet library to use. If ‘auto’, then the option io.parquet.engine is used. The default io.parquet.engine behavior is to try ‘pyarrow’, falling back to ‘fastparquet’ if ‘pyarrow’ is unavailable.
Those should be unaffected.
Python pickles have the same issue but it is a design decision per the docs.
Python docs > library > pickle: https://docs.python.org/3/library/pickle.html
Re: a hypothetical pickle parser protocol that doesn't eval code at parse time; "skipcode pickle protocol 6"; "AI Supply Chain Attack: How Malicious Pickle Files Backdoor Models" .. "Insecurity and Python Pickles": https://news.ycombinator.com/item?id=43426963
But python pickle is only supposed to be used with trusted input, so it’s not a vulnerability.
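That design decision is visible in the protocol itself: unpickling can invoke arbitrary callables via `__reduce__`, so loading an untrusted pickle is code execution. A minimal demonstration:

```python
import pickle

class Evil:
    def __reduce__(self):
        # The (callable, args) pair returned here is stored in the
        # pickle stream and *invoked at load time* -- this is why
        # pickle must only be used with trusted input.
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Evil())
result = pickle.loads(payload)  # runs print(...) as a side effect
```

A real payload would call os.system or similar instead of print; safer alternatives for untrusted data are JSON, or restricting pickle.Unpickler.find_class to an allowlist as the docs suggest.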
[deleted]
A university president makes a case against cowardice
[flagged]
Then they would need to tax nonprofit religious organizations too.
Why don't they just make the special interests pay their own multi-trillion dollar war bills instead of sabotaging US universities with surprise taxes?
If you increase expenses and cut revenue, what should you expect for your companies?
Why not just make a flat tax for everyone and end all the special interest pandering and exceptions for the rich. It is a poisonous misapplication of the time of our government to constantly be fiddling with tax code to favor one group or another.
Because a lot of people, including many economists, believe capital accumulating endlessly to the same class of thousand-ish people is bad. A flat income tax exacerbates wealth inequality considerably.
Our tax system now is worse than flat. Warren Buffett brags about paying a lower % than his secretary.
Either compare ideal tax structures with “no loopholes” (none of these exist in the real world) or compare actually-existing tax structures.
Comparing your ideal flat income tax with the current system is apples to oranges.
>>Why don't they just make the special interests pay their own multi-trillion dollar war bills instead of sabotaging US universities with surprise taxes?
>Either compare ideal tax structures with “no loopholes” (none of these exist in the real world) or compare actually-existing tax structures.
Hence I cannot compare your suggestion with the current system, as it is apples to oranges because loopholes would exist.
My thesis is that a flat tax would help to minimize the very loopholes you damn. The larger the tax code and the more it panders to particular interests, generally the more opportunities for 'loopholes.'
I don't want to work for a business created by, uh, upper class folks that wouldn't have done it if not for temporary tax breaks by a pandering grifter executive.
I believe in a strong middle class and upward mobility for all.
I don't think we want businesses that are dependent on war, hate, fear, and division for continued profitability.
I don't know whether a flat or a regressive or a progressive tax system is more fair or more total society optimal.
I suspect it is true that higher-income individuals receive more total subsidies than lower-income individuals.
You don't want a job at a firm that an already-wealthy founder could only pull off due to short-term tax breaks and wouldn't have founded if taxes go any higher.
You want a job at a firm run by people who are going to keep solving for their mission regardless of high taxes due to immediately necessary war expenses, for example.
In the interests of long-term economic health and national security of the United States, I don't think they should be cutting science and medical research funding.
Science funding has positive returns. Science funding has greater returns than illegal wars (that still aren't paid for).
Find 1980 on these charts of tax receipts, GDP, and income inequality: https://news.ycombinator.com/item?id=43140500 :
> "Federal Receipts as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/FYFRGDA188S
> "Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
From https://news.ycombinator.com/item?id=43220833 re: income inequality:
> GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
Find 1980 on a GINI index chart.
Yeah, I mean, I think we agree on most points.
I think there’s too many confounding economic factors to look at GINI alone and conclude the 1980 turning point was caused by nerfing the top income tax bracket. But a compelling argument could probably be made with more supporting data, which of course this margin is too narrow to contain and etc.
Better would be to remove inheritance after death, instead distributing that wealth among the citizenry equally.
List of countries by inheritance tax rates: https://en.wikipedia.org/wiki/List_of_countries_by_inheritan...
InitWare, a portable systemd fork running on BSDs and Linux
Shoot. Almost there, at least for us cybersecurity-minded folks.
Default-deny-all, then selecting what a process needs, is the better security granularity.
This default-ALLOW-all is too problematic for today's (and future) security needs.
Cuts down on the compliance paperwork too.
DAC: Discretionary Access Control: https://en.wikipedia.org/wiki/Discretionary_access_control :
> The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control).
Which permissions and authorizations can be delegated?
DAC is the out of the box SELinux configuration for most Linux distros; some processes are confined, but if the process executable does not have the necessary extended filesystem attribute labels the process runs unconfined; default allow all.
You can see which processes are confined with SELinux contexts with `ps -Z`.
MAC is default deny all;
MAC: Mandatory Access Control: https://en.wikipedia.org/wiki/Mandatory_access_control
The biggest problem is the use of an SELinux policy compiler that produces components understood only by the SELinux engine.
It does not help that SELinux policy source is not composable by function/procedure: it is at its grittiest granularity, which ironically is the best kind of security, but only if composed by the savviest SELinux system admins.
Writing policy often requires full knowledge of a program's static/dynamic libraries, any additional dynamic libraries those call, and its resource usage.
Additional frontend UI would be required to proactively determine suitability with those dynamic libraries before SELinux deployment becomes easy.
For now, it is trial and error, particularly for intermediate or junior system admins.
From https://news.ycombinator.com/item?id=30025477 :
> [ audit2allow, https://stopdisablingselinux.com/ ]
Applications don't need to be compiled with selinux libraries unless they want to bypass CLI tools like chcon and restorecon (which set extended filesystem attributes according to the system policy; typically at package install time if the package provenance is sufficient) by linking with libselinux.
Deterministic remote entanglement using a chiral quantum interconnect
From https://news.ycombinator.com/item?id=43044159 :
>>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
ScholarlyArticle: "Deterministic remote entanglement using a chiral quantum interconnect" (2025) https://www.nature.com/articles/s41567-025-02811-1
The state of binary compatibility on Linux and how to address it
I don't understand why they don't just statically link their binaries. First, they said this:
> Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime.
But then they immediately said they actually statically link all of their deps aside from libc.
> Instead, we take a different approach: statically linking everything we can.
If they're statically linking everything other than libc, then using musl or statically linking glibc will finish the job. Unless they have some need for loading shared libs at runtime which they didn't already have linked into their binary (i.e. manual dlopen), this solves the portability problem on Linux.
What am I missing (assuming I know of the security implications of statically linked binaries -- which they didn't mention as a concern)?
And please, statically linking everything is NOT a solution. The only reason I can still run some games from 20 years ago on my recent Linux is that they didn't stupidly statically link everything, so I at least _can_ replace the libraries with hooks that make the games work with newer versions.
As long as the library is available.
Neither static nor dynamic linking is looking to solve the 20 year old binaries issue, so both will have different issues.
But I think it's easier for me to find a 20-year-old ISO of a Red Hat/Slackware where I can simply run the statically linked binary. Dependency hell for older distros becomes really difficult when the older packages are no longer archived anywhere.
It's interesting to think how a 20 year old OS plus one program is probably a smaller bundle size than many modern Electron apps ostensibly built "for cross platform compatibility". Maybe microkernels are the way.
How should a microkernel run (WASI) WASM runtimes?
Docker can run WASM runtimes, but I don't think podman or nerdctl can yet.
From https://news.ycombinator.com/item?id=38779803 :
docker run \
--runtime=io.containerd.wasmedge.v1 \
--platform=wasi/wasm \
secondstate/rust-example-hello
From https://news.ycombinator.com/item?id=41306658 :
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
Native containers run on the host and can host normal containers if a container engine is installed. Compared to an Electron runtime, IDK how minimal a native container with systemd, podman, WASM runtimes, and portable GUI rendering libraries could be. CoreOS, which was for creating minimal host images that host containers, became Fedora Atomic, now the Fedora Atomic Desktops built with rpm-ostree: Silverblue, Kinoite, Sericea; and downstream, Bazzite and Secureblue.
Secureblue has a hardened_malloc implementation.
From https://jangafx.com/insights/linux-binary-compatibility :
> To handle this correctly, each libc version would need a way to enumerate files across all other libc instances, including dynamically loaded ones, ensuring that every file is visited exactly once without forming cycles. This enumeration must also be thread-safe. Additionally, while enumeration is in progress, another libc could be dynamically loaded (e.g., via dlopen) on a separate thread, or a new file could be opened (e.g., a global constructor in a dynamically loaded library calling fopen).
FWIU, ROP (Return-Oriented Programming) and gadget-finding approaches have implementations of things like dynamic header discovery of static and dynamic libraries at runtime, to compile more at runtime. That isn't safe, though: nothing re-verifies what's mutated after loading the executable into process space, after NX tagging or not, before and after secure enclaves and LD_PRELOAD (which some Go binaries don't respect, for example).
Can a microkernel do eBPF?
What about a RISC machine for WASM and WASI?
"Customasm – An assembler for custom, user-defined instruction sets" (2024) https://news.ycombinator.com/item?id=42717357
Maybe that would shrink some of these flatpaks which ship their own Electron runtimes instead of like the Gnome and KDE shared runtimes.
Python's manylinux project specifies a number of libc versions that manylinux packages portably target.
Manylinux requires a tool called auditwheel for Linux; the analogous tools are delocate for macOS and delvewheel for Windows.
Auditwheel > Overview: https://github.com/pypa/auditwheel#overview :
> auditwheel is a command line tool to facilitate the creation of Python wheel packages for Linux (containing pre-compiled binary extensions) that are compatible with a wide variety of Linux distributions, consistent with the PEP 600 manylinux_x_y, PEP 513 manylinux1, PEP 571 manylinux2010 and PEP 599 manylinux2014 platform tags.
> auditwheel show: shows external shared libraries that the wheel depends on (beyond the libraries included in the manylinux policies), and checks the extension modules for the use of versioned symbols that exceed the manylinux ABI.
> auditwheel repair: copies these external shared libraries into the wheel itself, and automatically modifies the appropriate RPATH entries such that these libraries will be picked up at runtime. This accomplishes a similar result as if the libraries had been statically linked without requiring changes to the build system. Packagers are advised that bundling, like static linking, may implicate copyright concerns
github/choosealicense.com: https://github.com/github/choosealicense.com
From https://news.ycombinator.com/item?id=42347468 :
> A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600
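A minimal sketch of that tag rule (the real resolution logic lives in installers like pip; this just restates the PEP 600 version comparison, ignoring the architecture suffix and the legacy manylinux1/2010/2014 aliases):

```python
# Does a manylinux_x_y platform tag match a system with a given glibc?
# Per PEP 600: a manylinux_x_y wheel requires glibc >= x.y.
import re

def manylinux_compatible(tag: str, glibc: tuple) -> bool:
    m = re.match(r"manylinux_(\d+)_(\d+)", tag)
    if not m:
        return False  # not a PEP 600 tag (e.g. musllinux)
    return glibc >= (int(m.group(1)), int(m.group(2)))

print(manylinux_compatible("manylinux_2_17", (2, 31)))  # True on glibc 2.31
print(manylinux_compatible("manylinux_2_34", (2, 31)))  # False: needs 2.34
```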
Return oriented programming: https://en.wikipedia.org/wiki/Return-oriented_programming
/? awesome return oriented programming site:github.com https://www.google.com/search?q=awesome+return+oriented+prog...
This can probably find multiple versions of libc at runtime, too: https://github.com/0vercl0k/rp :
> rp++ is a fast C++ ROP gadget finder for PE/ELF/Mach-O x86/x64/ARM/ARM64 binaries.
> How should a microkernel run (WASI) WASM runtimes?
Same as any other kernel—the runtime is just a userspace program.
> Can a microkernel do eBPF?
If it implements it, why not?
Should a microkernel implement eBPF and WASM? Or, for the same reasons that justify a microkernel in the first place (separation of concerns, least privilege, and then performance), should eBPF and most other things be confined to userspace?
Linux containers have process isolation features that userspace sandboxes like bubblewrap and runtimes don't.
Flatpaks bypass selinux and apparmor policies and run unconfined (on DAC but not MAC systems) because the path to the executable in the flatpaks differs from the system policy for */s?bin/* and so wouldn't be relabeled with the necessary extended filesystem attributes even on `restorecon /` (which runs on reboot if /.autorelabel exists).
Thus, e.g. Firefox from a signed package in a container on the host, and Firefox from a package on the host are more process-isolated than Firefox in a Flatpak or from a curl'ed statically-linked binary because one couldn't figure out the build system.
Container-selinux, Kata containers, and GVisor further secure containers without requiring the RAM necessary for full VM virtualization with Xen or Qemu; and that is possible because of container interface standards.
Linux machines run ELF binaries, which could include WASM instructions
/? ELF binary WASM : https://www.google.com/search?q=elf+binary+wasm :
mewz-project/wasker https://github.com/mewz-project/wasker :
> What's new with Wasker is, Wasker generates an OS-independent ELF file where WASI calls from Wasm applications remain unresolved.
> This unresolved feature allows Wasker's output ELF file to be linked with WASI implementations provided by various operating systems, enabling each OS to execute Wasm applications.
> Wasker empowers your favorite OS to serve as a Wasm runtime!
Why shouldn't we container2wasm everything? Because (rootless) Linux containers better isolate the workload than any current WASM runtime in userspace.
Non-Abelian Anyons and Non-Abelian Vortices in Topological Superconductors
"Non-Abelian Anyons and Non-Abelian Vortices in Topological Superconductors" (2023) https://arxiv.org/abs/2301.11614 :
> Abstract: Anyons are particles obeying statistics of neither bosons nor fermions. Non-Abelian anyons, whose exchanges are described by a non-Abelian group acting on a set of wave functions, are attracting a great attention because of possible applications to topological quantum computations. Braiding of non-Abelian anyons corresponds to quantum computations. The simplest non-Abelian anyons are Ising anyons which can be realized by Majorana fermions hosted by vortices or edges of topological superconductors, ν=5/2 quantum Hall states, spin liquids, and dense quark matter. While Ising anyons are insufficient for universal quantum computations, Fibonacci anyons present in ν=12/5 quantum Hall states can be used for universal quantum computations. Yang-Lee anyons are non-unitary counterparts of Fibonacci anyons. Another possibility of non-Abelian anyons (of bosonic origin) is given by vortex anyons, which are constructed from non-Abelian vortices supported by a non-Abelian first homotopy group, relevant for certain nematic liquid crystals, superfluid 3He, spinor Bose-Einstein condensates, and high density quark matter. Finally, there is a unique system admitting two types of non-Abelian anyons, Majorana fermions (Ising anyons) and non-Abelian vortex anyons. That is 3P2 superfluids (spin-triplet, p-wave paring of neutrons), expected to exist in neutron star interiors as the largest topological quantum matter in our universe.
> nematic liquid crystals,
- "Tunable entangled photon-pair generation in a liquid crystal" (2024) https://www.nature.com/articles/s41586-024-07543-5 .. https://news.ycombinator.com/item?id=40815388 .. SPDC entangled photon generation with nematic liquid crystal
NewsArticle: "A liquid crystal source of photon pairs" (2024) https://www.sciencedaily.com/releases/2024/06/240614141916.h...
Sadly, despite a stem PhD, I have no way of assessing whether this is an April Fools submission or not. I recognise some of the words in the abstract, like 'non-Abelian', but that's it.
Maybe it's time to retire.
It is not a joke. Topological anyons are the real deal, someday.
This is an article I found while preparing a comment about superfluid quantum gravity; https://www.reddit.com/r/AskPhysics/comments/1iqvxn0/comment...
I was collecting relevant articles and found this one.
TIL about the connection between superfluid quantum gravity, supersolid spin-nematic crystals, and spin-nematic liquid crystals (LCDs) for waveguiding entangled photons.
And also TIL about anyons in neutron stars, which are typically or always the impetus for black holes.
And also, hey, "3He" again.
Show HN: Offline SOS signaling+recovery app for disasters/wars
A couple of months ago, I built this app to help identify people stuck under rubble.
First responders have awesome tools. But in tough situations, even common folks need to help.
After what happened in Myanmar, we need something like this that works properly.
It has only been tested in controlled environments. It can also be improved; I know BLE is not _that_ effective under rubble.
If you have any feedback or can contribute, don't hold back.
> It can also be improved; I know BLE is not _that_ effective under rubble.
It's a tough problem to solve because you're up against the laws of physics and the very boring (and often counterintuitive) antenna theory. Bluetooth is in the UHF band, and UHF isn't good at penetrating anything, let alone concrete rubble.
To penetrate rubble effectively you really want to be in the ELF-VLF bands, (That's what submarines/mining bots/underground seismic sensors use to get signals out).
Obviously that's ridiculous. Everything from ELF to even HF is impossible to use in an "under the rubble" situation because of physics[1]. Bluetooth (UHF) might be "better than nothing", but you're losing at least 25-30 dB (which is like 99.9% of the signal) in 12 inches of concrete rubble. VHF (like a handheld radio) can buy you another 5 inches.
Honestly I think sound waves travel further in such a medium than RF waves.
[1]: Your "standard reference dipole" antenna needs to be 1/2 or 1/4 of your wavelength to resonate. At ELF-VLF that means an antenna 10k-1k feet long. You can play with inductors and loops to electrically lengthen your antenna without physically lengthening it, but you're not going to get that below 500-200 feet. The length of a submarine is an important design consideration when deciding what type of radio signal it needs to be able to receive/transmit versus how deep it needs to be for stealth.
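The wavelength arithmetic in [1] can be sketched numerically (quarter-wave monopole as the reference; the frequencies are illustrative picks within each band):

```python
# Quarter-wave antenna length vs. frequency: lambda = c / f, antenna ~ lambda/4.
C = 299_792_458.0  # speed of light, m/s

def quarter_wave_m(freq_hz: float) -> float:
    """Resonant quarter-wave monopole length in meters."""
    return C / freq_hz / 4.0

for name, f in [("VLF 20 kHz", 20e3),
                ("HF 10 MHz", 10e6),
                ("UHF/Bluetooth 2.4 GHz", 2.4e9)]:
    length = quarter_wave_m(f)
    print(f"{name}: {length:,.2f} m ({length * 3.281:,.0f} ft)")
```

A VLF quarter-wave works out to kilometers of wire, while a Bluetooth quarter-wave is about 3 cm, which is why the band choice dominates the rubble-penetration tradeoff.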
What about muon imaging?
What about Rydberg sensors for VLF earth penetrating imaging, at least?
From "3D scene reconstruction in adverse weather conditions via Gaussian splatting" https://news.ycombinator.com/item?id=42900053 :
> Is it possible to see microwave ovens on the ground with a Rydberg antenna array for in-cockpit drone Remote ID signal location?
With a frequency range from below 1 KHz to multiple THz fwiu, Rydberg antennae can receive VLF but IDK about ELF.
IIRC there's actually also a way to transmit with or like Rydberg antennae now; probably with VLF if that's best for the application and there's not already enough backscatter to e.g. infer occlusion with? https://www.google.com/search?q=transmit+with+Rydberg+antenn....
This is pretty cool! I need to learn more about both.
I imagine such fancier tools would be less available among common folks, and more among first responders.
NASA already has the tech to detect heartbeats under rubble using radar [1]. No additional equipment is needed by the rescued. The problem is emergency response can get overwhelmed in large disasters.
If Rydberg sensors would be more common, and new tech is added to mobile devices, this can seriously shift the playing field.
I will look into this, because we need out of the box solutions. Thank you!
[1]: https://www.dhs.gov/archive/detecting-heartbeats-rubble-dhs-...
"The Dark Knight" (Batman, 2009) is the one with the phone-based - is it wifi backscatter imaging - and the societal concerns.
FWIU there are contractor-grade imaging capabilities, and there are military-grade see-through-walls-and-earth capabilities that law enforcement also have, but there are challenges with due process.
At the right time of day, with the right ambient temperature, it's possible to see the studs in the walls with consumer IR but only at a distance.
Also, FWIU it's possible to find plastic in the ground - i.e. banned mines - with thermal imaging at certain times of day.
Could there be a transceiver on a post in the truck at the road, with other flying drones to measure backscatter and/or transceiver emissions?
Hopefully NASA or another solvent company builds a product out of their FINDER research (from JPL).
How many such heartbeat detection devices were ever manufactured? Did they ever add a better directional mic, like one that can read heartbeats from hundreds of meters away? Is it mountable on a motorized tripod?
It sounds like there is a signal separation challenge that applied onboard AI could help with.
From "Homemade AI drone software finds people when search and rescue teams can't" https://news.ycombinator.com/item?id=41764486#41775397 :
> {Code-and-Response, Call-for-Code}/DroneAid: https://github.com/Code-and-Response/DroneAid
> "DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2010) https://github.com/Code-and-Response/DroneAid
> All but one of the DroneAid Symbol Language Symbols are drawn within upward pointing triangles.
> Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?
I am a solo founder developing this new social media platform
Hey everyone! I am a 19 y/o founder taking a gap year to build Airdel, a social media platform startup. I've noticed many of today's social media platforms aren't action-oriented or optimized for real-time collaboration, which is important for people who have personal goals or are working on impactful real-world missions and projects (humanitarian events, e.g. California wildfire victims; emergency healthcare deliveries; climbing Mt. Everest; working on research/patents; or even just building cool projects). That's why I'm building a social media platform that connects your needs with people who can help, whether it's industry stakeholders, professionals, or individuals with a common solution to your problem.
Airdel contains a feed where you can post your mission progress, updates, and achievements, an online directory where you can search for needs and missions to collaborate and partner with others on, and a professional working platform with productivity tools to collaborate on missions until completion. The platform tracks the entire life of your mission/project from start to finish which is recorded on your personal profile. Collaborators can be anonymously reviewed, allowing others to judge trustworthiness for future collaborations.
People who have signed up are in the humanitarian, healthcare, tech, science, business, entertainment, sports and creatives sector needing specific urgent solutions, connections, and resources. These span from NGOs, Institutions, Individuals, Suppliers, Sellers, Organizations and government.
Let me know what you guys think! I will be answering every one of your questions. Shoot!
Landing page: www.getairdel.com
Interested in helping out? Contact me: shannon@getairdel.com
Best, Shannon
From https://news.ycombinator.com/item?id=33302599 :
> If you label things with #GlobalGoal hashtags, others can find solutions to the very same problems.
The Global Goals are the UN Sustainable Development Goals for 2015-2030.
There are 17 Goals, 169 Targets and 247 Indicators.
There have been social media campaigns with hashtags, so there are already "hames" (hashtaggable names) for the high level goals.
Anyone can implement #hashtags, +tags, WikiWords, or similar. The twitter-text library is open source, for example.
Re: Schema.org/Mission, :Goal, :Objective [...] linked data: https://news.ycombinator.com/item?id=12525141
"Ask HN: Any well funded tech companies tackling big, meaningful problems?" https://news.ycombinator.com/item?id=24412493
From https://github.com/thegreenwebfoundation/carbon.txt/issues/3... re: a proposed carbon.txt:
> Is there a way for sites to sign a claim with e.g. ld-proofs and then have an e.g. an independent auditor - maybe also with a W3C DID Decentralized Identifier - sign to independently verify?
When an auditor confirms a sustainability report, they could sign it and award a signed blockcert.
(Other ideas considered for similar matching, recommendation, and expert-finding objectives: A StackExchange site for SDG Q&A with upvotes and downvotes, )
I saw that you're not interested in (syndicating content for inbound links with) AT protocol? Is it perceived development cost and estimation of inbound traffic?
What differentiates your offering from existing platforms like LinkedIn? How could your mission be achieved with existing solutions?
There is an SDG vocabulary to link problems and solutions with: https://unsceb.org/common-digital-identifiers-sdgs :
> The common identifiers for Sustainable Development Goals, Targets and Indicators are available online. The portal is provided by the UN Library, which also hosts the UN Bibliographic Information System (UN BIS) Taxonomy. The IRIs have also been mapped to the UN Environment SDG Interface Ontology (SDGIO, by UNEP) and to the UN Bibliographic Information System vocabulary, to enable the use of these resources seamlessly in linking documents and data from different sources to Goals, Targets, Indicators, and closely related concepts.
The UN SDG vocabulary: https://metadata.un.org/sdg/?lang=en
"A Knowledge Organization System for the United Nations Sustainable Development Goals" (2021) https://link.springer.com/chapter/10.1007/978-3-030-77385-4_...
Researchers get spiking neural behavior out of a pair of transistors
> Specifically, the researchers operate a transistor under what are called "punch-through conditions." This happens when charges build up in a semiconductor in a way that can allow bursts of current to cross through the transistor even when it's in the off state. Normally, this is considered a problem, so processors are made so that this doesn't occur. But the researchers recognized that a punch-through event would look a lot like the spike of a neuron's activity.
> The team found that, when set up to operate on the verge of punch-through mode, it was possible to use the gate voltage to control the charge build-up in the silicon, either shutting the device down or enabling the spikes of activity that mimic neurons. Adjustments to this voltage could allow different frequencies of spiking. Those adjustments could be made using spikes as well, essentially allowing spiking activity
> [...] All of this simply required standard transistors made with CMOS processes, so this is something that could potentially be put into practice fairly quickly.
ScholarlyArticle: "Synaptic and neural behaviours in a standard silicon transistor" (2025) https://www.nature.com/articles/s41586-025-08742-4
How do these compare to memristors?
Memristors: https://en.wikipedia.org/wiki/Memristor
From "A Chip Has Broken the Critical Barrier That Could Ultimately Begin the Singularity" (2025) https://www.aol.com/chip-broken-critical-barrier-could-17000... :
> Here we report an analogue computing platform based on a selector-less analogue memristor array. We use interfacial-type titanium oxide memristors with a gradual oxygen distribution that exhibit high reliability, high linearity, forming-free attribute and self-rectification. Our platform — which consists of a selector-less (one-memristor) 1 K (32 × 32) crossbar array, peripheral circuitry and digital controller — can run AI algorithms in the analogue domain by self-calibration without compensation operations or pretraining.
Can't these components model spreading activation?
Spreading activation: https://en.wikipedia.org/wiki/Spreading_activation
Memristors are cool, and all-in-one (including memory), but they are not a standard CMOS process.
That is the whole difference.
Anyway, any of these techs will need something like a CCD matrix (or maybe modern flash as storage, or DRAM cells with refresh), but CMOS is very straightforward to produce and use.
Why I mention CCDs: a CCD is analog storage with multiple levels, organized as multiple lines with an output line on one side. It could also be used as a solid-state circular buffer to access separate cells.
So these CMOS transistors would work as the neuron, but the weights would be stored as analog values in a CCD.
Is that more debuggable?
Re: the Von Neumann bottleneck, debuggability, and I guess any form of computation in RAM; https://news.ycombinator.com/item?id=42312971
It seems like memristors have been n years away for quite a while now; maybe like QC.
Wonder if these would work for spiking neural behavior with electronic transistors:
"Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9" (2024) https://news.ycombinator.com/item?id=42318944
Cerebras WSE is probably the fastest RAM bus, though it's not really a bus; it's just multiple addressed chips on the same wafer, FWIU.
> Is that more debuggable?
I've seen many approaches to computing in my life: optical, mechanical, hydraulic, even pneumatic. Classic digital CMOS is the most universal, with a huge range of mature debugging instruments.
Digital CMOS is so universal that it's even worth paying orders-of-magnitude-worse power consumption while finding the best structure, and only then switching to something less debuggable but with better consumption.
Unfortunately, I don't have enough data to say which will be better on power, CMOS or memristors. Right now CMOS is mature COTS tech, while memristors are still a few years from COTS.
Cerebras, as far as I know, is based on digital CMOS, just with some tricks to use nearly the whole wafer. BTW, Sir Clive Sinclair tried a similar approach to make wafer-scale storage, but unsuccessfully.
> it's not really a bus it's just addressed multiple chips on the same wafer
I'm an electronics engineer, and I once even baked a chip layer in a semiconductor practicum, so I'm aware of the technologies.
As I said about Sinclair, a few companies have tried something new in the semiconductor market, and some have even succeeded.
RAM manufacturers have long used the approach of making multiple chips on one wafer: most RAM packages actually contain 4-6 RAM dies, but a few dies don't pass tests and are disabled by fuses, so packages appear with 2 or 4 RAM dies enabled, and even with odd numbers of enabled dies.
It looks like Cerebras uses an approach similar to the RAM manufacturers', just for another niche.
Compiler Options Hardening Guide for C and C++
While all of these are very useful, you'll find that a lot of these are already enabled by default in many distributions of the gcc compiler. Sometimes they're embedded in the compiler itself through a patch or configure flag, and sometimes they're added through CFLAGS variables during the compilation of distribution packages. I can only really speak of gentoo, but here's a non-exhaustive list:
* -fPIE is enabled with --enable-default-pie in GCC's ./configure script
* -fstack-protector-strong is enabled with --enable-default-ssp in GCC's ./configure script
* -Wl,-z,relro is enabled with --enable-relro in Binutils' ./configure script
* -Wp,-D_FORTIFY_SOURCE=2, -fstack-clash-protection, -Wl,-z,now and -fcf-protection=full are enabled by default through patches to GCC in Gentoo.
* -Wl,--as-needed is enabled through the default LDFLAGS
For reference, here's the default compiler flags for a few other distributions. Note that these don't include GCC patches:
* Arch Linux: https://gitlab.archlinux.org/archlinux/packaging/packages/pa...
* Alpine Linux: https://gitlab.alpinelinux.org/alpine/abuild/-/blob/master/d...
* Debian: It's a tiny bit more obscure, but running `dpkg-buildflags` on a fresh container returns the following: CFLAGS=-g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/<myuser>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection
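A quick way to cross-check a distro's defaults against the hardening flags discussed here is to grep the flag string itself; a minimal sketch, using a subset of the Debian `dpkg-buildflags` output quoted above as input:

```python
# Check a CFLAGS string for the hardening flags discussed in the guide.
HARDENING = {"-fstack-protector-strong", "-fstack-clash-protection",
             "-fcf-protection", "-Werror=format-security"}

# Subset of Debian's dpkg-buildflags CFLAGS quoted above.
debian_cflags = ("-g -O2 -Werror=implicit-function-declaration "
                 "-fstack-protector-strong -fstack-clash-protection "
                 "-Wformat -Werror=format-security -fcf-protection")

present = HARDENING & set(debian_cflags.split())
missing = HARDENING - present
print(sorted(present))   # all four hardening flags
print(sorted(missing))   # empty for Debian's defaults
```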
From https://news.ycombinator.com/item?id=38505448 :
> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Is there a good reference for comparing these compile-time build flags and their defaults with Make, CMake, Ninja Build, and other build systems, on each platform and architecture?
From https://news.ycombinator.com/item?id=41306658 :
> From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
>> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
But those are per-arch performance flags, not security flags.
In my experience distributions only patch GCC or modify the package building environment variables to add compiler flags. You can be certain that the compiler flags used in build systems like cmake and meson will be vanilla.
Make adds no additional compiler flags (check the output of "make -n -p"). Neither does Ninja.
Autotools is extremely conservative with compiler flags and will only really add -O2 -g, as well as include paths and defines specified by the developer.
CMake has some default compiler flags depending on your CMAKE_BUILD_TYPE, mostly affecting optimization, and disabling assert() in Release builds (-DNDEBUG). It also has some helpers for precompiled headers and link-time optimization that enable the relevant flags.
Meson uses practically the same flags as cmake, with the exception of not passing -DNDEBUG unless the developer of the meson build really wants it to.
These are all the relevant build systems for Linux packages. I'm not familiar with gn, bazel, etc. In general, build systems dabble a bit in optimization flags but pay no mind to hardening.
Show HN: Make SVGs interactive in React with 1 line
Hey HN
I built svggles (npm: interactive-illustrations), a React utility that makes it easy to add playful, interactive SVGs to your frontend.
It supports mouse-tracking, scroll, hover, and other common interactions, and it's designed to be lightweight and intuitive for React devs.
The inspiration came from my time playing with p5.js — I loved how expressive and fun it was to create interactive visuals. But I also wanted to bring that kind of creative freedom to everyday frontend work, in a way that fits naturally into the React ecosystem.
My goal is to help frontend developers make their UIs feel more alive — not just functional, but fun. I also know creativity thrives in community, so it's open source and I’d love to see contributions from artists, developers, or anyone interested in visual interaction.
Links: Website + Docs: svggles.vercel.app
GitHub: github.com/shantinghou/interactive-illustrations
NPM: interactive-illustrations
Let me know what you think — ideas, feedback, and contributions are all welcome
MDN docs > SVG and CSS: https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorials/S...
Python lock files have officially been standardized
What does this mean for pip-tools' requirements.in, Pipfile.lock, pip constraints.txt, poetry.lock, pyproject.toml, and uv.lock?
I think the plan is to replace all of those.
The PEP has buy in from all the major tools.
It doesn't mention them in alphabetical order.
From last week re: GitHub Dependabot and conda/mamba/micromamba/pixi support:
"github dependabot, meta.yaml, environment.yml, conda-lock.yaml, pixi.lock" https://github.com/regro/cf-scripts/issues/3920#issuecomment... https://github.com/dependabot/dependabot-core/issues/2227#is... incl. links to the source of dependabot
Quantum advantage for learning shallow NNs with natural data distributions
ScholarlyArticle: "Quantum advantage for learning shallow neural networks with natural data distributions" (2025) https://arxiv.org/abs/2503.20879
NewsArticle: "Google Researchers Say Quantum Theory Suggests a Shortcut for Learning Certain Neural Networks" (2025) https://thequantuminsider.com/2025/03/31/google-researchers-... :
> Using this model, [Quantum Statistical Query (QSQ) learning,] the authors design a two-part algorithm. First, the quantum algorithm finds the hidden period in the function using a modified form of quantum Fourier transform — a core capability of quantum computers. This step identifies the unknown weight vector that defines the periodic neuron. In the second part, it applies classical gradient descent to learn the remaining parameters of the cosine combination. The algorithm is shown to require only a polynomial number of steps, compared to the exponential cost for classical learners. [...]
> The researchers carefully address several technical challenges. For one, real-valued data must be discretized into digital form to use in a quantum computer.
Quantum embedding:
> Another way to put this: real-world numbers must be converted into digital chunks so a quantum computer can process them. But naive discretization can lose the periodic structure, making it impossible to detect the right signal. The authors solve this by designing a pseudoperiodic discretization. This approximates the period well enough for quantum algorithms to detect it.
> They also adapt an algorithm from quantum number theory called Hallgren’s algorithm to detect non-integer periods in the data. While Hallgren’s method originally worked only for uniform distributions, the authors generalize it to work with “sufficiently flat” non-uniform distributions like Gaussians and logistics, as long as the variance is large enough.
There is not yet a Wikipedia article on (methods of) "Quantum embedding".
How many qubits are necessary to roll 2, 8, or 6-sided quantum dice?
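An n-sided die needs ceil(log2 n) measured qubits, with rejection sampling when n isn't a power of two; a quick stdlib sketch of the arithmetic (not a quantum circuit):

```python
import math

def qubits_for_die(sides: int) -> int:
    """Minimum qubits needed to address `sides` equally likely outcomes."""
    return math.ceil(math.log2(sides))

# 2-sided: 1 qubit; 6-sided: 3 qubits (rejecting measured outcomes 6 and 7
# and re-rolling); 8-sided: 3 qubits exactly, no rejection needed.
print([qubits_for_die(n) for n in (2, 6, 8)])  # [1, 3, 3]
```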
Embedding (mathematics) https://en.wikipedia.org/wiki/Embedding
Embedding (machine learning) https://en.wikipedia.org/wiki/Embedding_(machine_learning)
/? quantum embedding review: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=qua...
Continuous embedding: https://en.wikipedia.org/wiki/Continuous_embedding
Continuous-variable quantum information; https://en.wikipedia.org/wiki/Continuous-variable_quantum_in...
Symmetry between up and down quarks is more broken than expected
BTW, isospin is actually a count of up vs. down quarks; it's not a fundamental property like spin or charge.
It's an old term that was created before they knew that up and down quarks existed.
Personally I find the term outdated because there are 4 other quarks, and isospin only talks about two of them.
Isospin symmetry: https://en.wikipedia.org/wiki/Isospin #History :
> Isospin is also known as isobaric spin or isotopic spin.
Supersymmetry: https://en.wikipedia.org/wiki/Supersymmetry
Does the observed isospin asymmetry disprove supersymmetry, if isospin symmetry is an approximate symmetry?
Stochastic Reservoir Computers
"Stochastic reservoir computers" (2025) https://www.nature.com/articles/s41467-025-58349-6 :
> Abstract: [...] This allows the number of readouts to scale exponentially with the size of the reservoir hardware, offering the advantage of compact device size. We prove that classes of stochastic echo state networks form universal approximating classes. We also investigate the performance of two practical examples in classification and chaotic time series prediction. While shot noise is a limiting factor, we show significantly improved performance compared to a deterministic reservoir computer with similar hardware when noise effects are small.
Glutamate Unlocks Brain Cell Channels to Enable Thinking and Learning
ScholarlyArticle: "Glutamate gating of AMPA-subtype iGluRs at physiological temperatures" (2025) https://www.nature.com/articles/s41586-025-08770-0 ; CryoEM
From https://news.ycombinator.com/item?id=39836127 :
> High glutamine-to-glutamate ratio predicts the ability to sustain motivation [...]
> MSG [...]
> So, IIUC, when you feed LAB glutamate, you get GABA?
Pixar One Thirty
Tessellation #History, #Overview: https://en.wikipedia.org/wiki/Tessellation :
> More formally, a tessellation or tiling is a cover of the Euclidean plane by a countable number of closed sets, called tiles, such that the tiles intersect only on their boundaries.
Category:Tileable textures: https://commons.wikimedia.org/wiki/Category:Tileable_texture...
Can PxrRoundCube be called from RenderManForBlender, with BlenderMCP? https://github.com/prman-pixar/RenderManForBlender
Texture synthesis > Methods: https://en.wikipedia.org/wiki/Texture_synthesis#Methods
Khan Academy > Computing > Pixar in a Box > Unit 8: Patterns: https://www.khanacademy.org/computing/pixar/pattern
Matrix Calculus (For Machine Learning and Beyond)
> The class involved numerous example numerical computations using the Julia language, which you can install on your own computer following these instructions. The material for this class is also located on GitHub at https://github.com/mitmath/matrixcalc
Mathematical Compact Models of Advanced Transistors [pdf]
This is from 2018, anyone in the field know if it's still state of the art or a historic curiosity? I know that we've started using euv since then which seems like it would change things.
State of the art in transistors?
- "Researchers get spiking neural behavior out of a pair of [CMOS] transistors" (2025) https://news.ycombinator.com/item?id=43503644
- Memristors
- Graphene-based transistors
EUV and nanolithography?
SOTA alternatives to EUV for nanolithography include NIL nanoimprint lithography (at 10-14nm at present fwiu), nanoassembly methods like atomic/molecular deposition and optical tweezers, and a new DUV solid-state laser light source at 193nm.
How to report a security issue in an open source project
Security.txt is a standard for sharing vuln disclosure information; /.well-known/security.txt or /security.txt .
security.txt: https://en.wikipedia.org/wiki/Security.txt
Responsible disclosure -> CVD: Coordinated Vulnerability Disclosure: https://en.wikipedia.org/wiki/Coordinated_vulnerability_disc...
OWASP Vulnerability Disclosure Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...
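A minimal /.well-known/security.txt per RFC 9116 (the contact address, dates, and URLs are placeholders; Contact and Expires are the required fields):

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Preferred-Languages: en
Policy: https://example.com/security-policy
Canonical: https://example.com/.well-known/security.txt
```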
Publishers trial paying peer reviewers – what did they find?
> USD $250
How much deep research does $250 yield by comparison?
Knowledge market > Examples; Google Answers, Yahoo Answers, https://en.wikipedia.org/wiki/Knowledge_market#Examples
I'm not sure why one would compare reviews by acknowledged experts in a field with stuff written by anonymous randos, and it seems highly unlikely that anyone with the appropriate qualifications would be lurking on some mechanical turk-like site.
I'm also deeply suspicious of the confidentiality of anything sent to one of those sites.
However this does suggest the idea that a high-powered university in a low-income country might be able to cut a deal to provide reviewing services...
You can get 50 reviews on Fiverr for that price!
Tracing the thoughts of a large language model
XAI: Explainable artificial intelligence: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Inside arXiv–The Most Transformative Platform in All of Science
ArXiv: https://en.wikipedia.org/wiki/ArXiv
ArXiv accepts .ps (PostScript), .tex (LaTeX source), and .pdf (PDF) ScholarlyArticle uploads.
ArXiv docs > Formats for text of submission: https://info.arxiv.org/help/submit/index.html#formats-for-te...
The internet and the web are the most transformative platforms in all of science, though.
Chimpanzees act as 'engineers', choosing materials to make tools
Tool use by non-humans > Primates > Chimpanzees and bonobos: https://en.wikipedia.org/wiki/Tool_use_by_non-humans#Chimpan...
Accessible open textbooks in math-heavy disciplines
"BookML: automated LaTeX to bookdown-style HTML and SCORM, powered by LaTeXML" https://vlmantova.github.io/bookml/
LaTeXML: https://en.wikipedia.org/wiki/LaTeXML :
LaTeXML parses LaTeX with Perl and emits XML.
SCORM is a standard for educational content in ZIP packages which is supported by Moodle, ILIAS, Sakai, Canvas, and a number of other LMS Learning Management Systems.
SCORM: https://en.wikipedia.org/wiki/Sharable_Content_Object_Refere...
xAPI (aka Experience API, aka TinCan API) is a successor spec to SCORM for event messages to LRS Learning Record Stores. Like SCORM, xAPI is stewarded by ADL.
re: xAPI, schema.org/Action, and JSON-LD: https://github.com/RusticiSoftware/TinCanSchema/issues/7
schema.org/Action describes potential actions: https://schema.org/docs/actions.html
For example, from the Schema.org "Potential Actions" doc: https://schema.org/docs/actions.html :
{
  "@context": "https://schema.org",
  "@type": "Movie",
  "name": "Footloose",
  "potentialAction": {
    "@type": "WatchAction"
  }
}
That could be a syllabus. Action types include: BuyAction, AssessAction > ReviewAction,
Schema.org > "Full schema hierarchy" > [Open hierarchy] > Action and rdfs:subClassOf subclasses thereof: https://schema.org/docs/full.html
What Linked Data should [math textbook] publishing software include when generating HTML for the web?
https://schema.org/CreativeWork > Book, Audiobook, Article > ScholarlyArticle, Guide, HowTo, Blog, MathSolver
The schema.org Thing > CreativeWork > LearningResource RDFS class has the :assesses, :competencyRequired, :educationalLevel, :educationalAlignment, and :teaches RDFS properties: https://schema.org/LearningResource
You can add bibliographic metadata and curricular Linked Data to [OER LearningResource] HTML with schema.org classes and properties as JSON-LD, RDFa, or Microdata.
The schema.org/about property has a domain which includes CreativeWork and a range which includes Thing, so a :CreativeWork is :about a :Thing which could be a subclass of :CreativeWork.
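For example, a hedged JSON-LD sketch of curricular Linked Data for an open textbook (all values are placeholders; the types and properties are from schema.org):

```json
{
  "@context": "https://schema.org",
  "@type": ["Book", "LearningResource"],
  "name": "Open Linear Algebra",
  "teaches": "Eigenvalues and eigenvectors",
  "educationalLevel": "Undergraduate",
  "about": {"@type": "Thing", "name": "Linear algebra"}
}
```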
I work with MathJax and LaTeX in notebooks a bit, and have generated LaTeX and then PDF with Sphinx and texlive, e.g. with the ReadTheDocs docker container, which already has the multiple GB of LaTeX necessary to render a README.rst as PDF without pandoc:
The Jupyter Book docs now describe how that works.
Jupyter Book docs > Customize LaTeX via Sphinx: https://jupyterbook.org/en/stable/advanced/pdf.html#customiz...
How to build the docs with the readthedocs docker image onesself: https://github.com/jupyter-book/jupyter-book/issues/991
ReadTheDocs > Dev > Design > Build Images > Time required to install languages at build time [with different package managers with varying performance] https://docs.readthedocs.com/dev/latest/design/build-images....
The jupyter-docker-stacks, binderhub, and condaforge/miniforge3 images build with micromamba now IIRC.
condaforge/miniforge3: https://hub.docker.com/r/condaforge/miniforge3
Recently, I've gotten into .devcontainer/devcontainer.json, which allows use of one's own Dockerfile or a preexisting docker image, installs LSP and vscode support on top, and then runs the onCreateCommand, postStartCommand
A number of tools support devcontainer.json: https://containers.dev/supporting
Devcontainers could be useful for open textbooks in math-heavy disciplines; so that others can work within, rebuild, and upgrade the same container env used to build the textbook.
Re: MathJax, LaTeX, and notebooks:
To left-align a LaTeX expression in a (Jupyter,Colab,VScode,) notebook wrap the expression with single dollar signs. To center-align a LaTeX expression in a notebook, wrap it with double dollar signs:
$ \alpha_{\beta_1} $
$$ \alpha_{\beta_2} $$
Textbooks, though? Interactive is what they want. How can we make textbooks interactive?
It used to be that textbooks were for copying down from by hand.
To engage and entertain this generation: ManimCE, scriptable 3D simulators with test assertions, Thebelab,
Jupyter Book docs > "Launch into interactive computing interfaces" > BinderHub ( https://mybinder.org ), JupyterHub, Colab, Deepnote: https://jupyterbook.org/en/stable/interactive/launchbuttons....
JupyterLite-xeus builds a jupyterlite static site from an environment.yml; such that e.g. the xeus-python kernel and other packages are compiled to WebAssembly (WASM) so that you can run Jupyter notebooks in a browser without a server:
repo2jupyterlite works like repo2docker, which powers BinderHub, which generates a container with a current version of Jupyter installed after building the container according to one or more software dependency requirement specification files in /.binder or the root of the repo.
repo2jupyterlite: https://github.com/jupyterlite/repo2jupyterlite
jupyterlite-xeus: https://jupyterlite-xeus.readthedocs.io/en/latest/
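For reference, a minimal environment.yml of the kind jupyterlite-xeus consumes; the emscripten-forge channel URL and package pins here are assumptions that vary by release, so check the jupyterlite-xeus docs:

```yaml
name: xeus-lite-env
channels:
  - https://repo.mamba.pm/emscripten-forge
  - conda-forge
dependencies:
  - xeus-python
  - numpy
```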
Getting hit by lightning is good for some tropical trees
Also re: lightning and living things, from https://news.ycombinator.com/item?id=43044159 :
> "Gamma radiation is produced in large tropical thunderstorms" (2024)
> "Gamma rays convert CH4 to complex organic molecules [like glycine,], may explain origin of life" (2024)
Harnessing Quantum Computing for Certified Randomness
ScholarlyArticle: "Certified randomness using a trapped-ion quantum processor" (2025) https://www.nature.com/articles/s41586-025-08737-1
Re: different - probably much less expensive - approaches to RNG;
From "Cloudflare: New source of randomness just dropped" (2025) https://news.ycombinator.com/item?id=43321797 :
> Another source of random entropy better than a wall of lava lamps:
>>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
100Gbit/s is faster than qualifying noise from a 56 qubit quantum computer?
>> google/paranoid_crypto.lib.randomness_tests
There's yet no USB or optical interconnect RNG based on quantum vacuum fluctuations, though.
Building and deploying a custom site using GitHub Actions and GitHub Pages
I just set up an apt repo of uv packages using GitHub Pages via Actions and wrote up some notes here: https://linsomniac.com/post/2025-03-18-building_and_publishi...
Looks like Simon and I were working on very similar things simultaneously.
There should be a "Verify signature of checksums and Verify checksums" at "Download latest uv binary".
GitHub has package repos; "GitHub Packages" for a few package formats, and OCI artifacts. https://docs.github.com/en/packages/working-with-a-github-pa...
OCI Artifacts with labels include signed Container images and signed Packages.
Packages can be hosted as OCI image repository artifacts with signatures; but package managers don't natively support OCI image stores, though many can install packages hosted at GitHub Packages URLs which are referenced by a signed package repository manifest on a GitHub Pages or GitLab Pages site (e.g. with CF DNS and/or Cloudflare Pages in front).
>There should be a "Verify signature of checksums and Verify checksums"
I get it, but verifying checksums of downloads via https from github releases, downloaded across github's infrastructure (admittedly Azure) to github's runners, seems like overkill. Especially as I don't see any signatures of the checksums, so the checksums are being retrieved via that same infrastructure with the same security guarantees. What am I missing?
If downloading over https and installing were enough, we wouldn't need SLSA, TUF, or sigstore.
.deb and .rpm package managers typically reference a .tar.gz with a checksum; that checksum is committed to git signed with a GPG key; and then the package is signed by a (reproducible) CI build with the signing key for that repo's packages.
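The "verify checksums" step is just hashing the downloaded artifact and comparing against the digest published in a (separately signature-verified) SHA256SUMS file; a stdlib sketch with hypothetical file names:

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    """Constant-time compare against a digest from a (signed) checksum file."""
    return hmac.compare_digest(sha256_of(path), expected_hex)

# Demo with a local file standing in for a downloaded release artifact.
with open("artifact.bin", "wb") as f:
    f.write(b"release bytes")
expected = hashlib.sha256(b"release bytes").hexdigest()
assert verify("artifact.bin", expected)
```

In a real pipeline you would first verify the signature on the checksum file itself (e.g. with gpg or sigstore) before trusting any digest in it.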
conda-forge can automatically send a Pull Request to update the upstream archive URL and cached checksum in the package manifest when there is a new upstream package.
Actually, the conda-forge bot does more than that;
From https://github.com/regro/cf-scripts/issues/3920#issuecomment... re: (now github) dependabot maybe someday scanning conda environment.yml, conda-lock.yml, and/or feedstock meta.yml:
> So the bot here does a fair bit more than update dependencies and/or versions.
> We have globally pinned ABIs which have to migrated in a specific order. We also use the bot here to start the migrations, produce progress / error outputs on the status page, and then close the migrations. The bot here also does specific migrations of dependencies that have been renamed, recipe maintenance, and more.
Dependabot scans for vulnerable software package versions in software dependency specification documents for a number of languages and package formats, when you git push to a repo which has dependabot configured in their dependabot.yml:
From the dependabot.yml docs: https://docs.github.com/en/code-security/dependabot/working-... :
> Define one package-ecosystem element for each package manager that you want Dependabot to monitor for new versions
Dependabot supports e.g. npm, pip, gomod, cargo, docker, github-actions, devcontainers,
SBOM tools have additional methods of determining which packages are installed on a specific server or container instance.
Static sites are sometimes build-and-forget projects that also need regular review of the statically-compiled-in dependencies for known vulns.
E.g. jupyterlite-xeus builds WASM static sites from environment.yml; though Hugo is probably much faster.
AI Supply Chain Attack: How Malicious Pickle Files Backdoor Models
From "Insecurity and Python Pickles" (2024) https://news.ycombinator.com/item?id=39685128 :
> There should be a data-only pickle serialization protocol (that won't serialize or deserialize code).
> How much work would it be to create a pickle protocol that does not exec or eval code?
"Title: Pickle protocol version 6: skipcode pickles" https://discuss.python.org/t/create-a-new-pickle-protocol-ve...
I have to agree with Chris Angelico there:
> Then the obvious question is: Why? Why use pickle? The most likely answer is “because <X> can’t represent what I need to transmit”, but for that to be at all useful to your proposal, you need to show examples that won’t work in well-known safe serializers.
Code in packages should be signed.
Code in pickles should also be signed.
I have no need for the pickle module now, but years ago thought there might have been safer way to read data that was already in pickles.
For backwards compatibility, skipcode=False would have to be the default, were someone to implement a pickle str parser that doesn't eval code.
JS/ES/TS Map doesn't map to JSON.
Pickle is still good for custom objects (JSON loses methods and also order), graphs & circular refs (JSON breaks), functions & lambdas (essential for ML & distributed systems), and is provided out of the box.
We're contemplating protocols that don't evaluate or run code; that rules out serializing functions or lambdas (i.e., code).
Custom objects in Python don't have "order" unless they're using `__slots__` - in which case the application already knows what they are from its own class definition. Similarly, methods don't need to be serialized.
A general graph is isomorphic to a sequence of nodes plus a sequence of edge definitions. You only need your own lightweight protocol on top.
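Concretely: json.dumps rejects a cycle, but the same graph round-trips as a node list plus an edge list; a minimal sketch:

```python
import json

# A tiny cyclic graph: a -> b -> a.
a, b = {"name": "a"}, {"name": "b"}
a["next"], b["next"] = b, a

# Direct serialization fails on the cycle.
try:
    json.dumps(a)
except ValueError:
    pass  # "Circular reference detected"

# Encode as a node list plus an edge list keyed by node index.
nodes = [{"name": n["name"]} for n in (a, b)]
edges = [(0, 1), (1, 0)]  # (source index, target index) for "next"
blob = json.dumps({"nodes": nodes, "edges": edges})

# Rebuild: restore the "next" pointers from the edge list.
data = json.loads(blob)
rebuilt = [dict(n) for n in data["nodes"]]
for src, dst in data["edges"]:
    rebuilt[src]["next"] = rebuilt[dst]
assert rebuilt[0]["next"]["next"] is rebuilt[0]
```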
Because globals(), locals(), classes, and class instances are backed by dicts, and dicts are insertion-ordered in CPython since 3.6 (and in the Python spec since 3.7), object attributes are effectively ordered in Python.
Object instances with __slots__ do not have a dict of attributes.
__slots__ attributes of Python classes are ordered, too.
(Sorting and order; Python 3 objects must define at least __eq__ and __lt__ in order to be sorted. @functools.total_ordering https://docs.python.org/3/library/functools.html#functools.t... )
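A minimal @functools.total_ordering sketch: define only __eq__ and __lt__ and the remaining comparison methods are derived:

```python
from functools import total_ordering

@total_ordering
class Version:
    """Sortable by defining only __eq__ and __lt__."""
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

vs = sorted([Version(2, 0), Version(1, 9)])
assert vs[0] == Version(1, 9)
assert vs[0] <= vs[1]  # __le__ is derived by total_ordering
```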
Are graphs isomorphic if their nodes and edges are in a different sequence?
assert dict(a=1, b=2) == dict(b=2, a=1)
from collections import OrderedDict as odict
assert odict(a=1, b=2) != odict(b=2, a=1)
To cryptographically sign RDF in any format (XML, JSON, JSON-LD, RDFa), a canonicalization algorithm is applied to normalize the input data prior to hashing and cryptographically signing. Like Merkle hashes of tree branches, a cryptographic signature of a normalized graph is a substitute for more complete tests of isomorphism.
RDF Dataset Canonicalization algorithm: https://w3c-ccg.github.io/rdf-dataset-canonicalization/spec/...
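A toy sketch of the canonicalize-then-hash idea (sorting stands in for real canonicalization; the actual RDF algorithm must additionally relabel blank nodes deterministically):

```python
import hashlib

def graph_digest(triples):
    """Toy canonicalization: sort N-Triples-like tuples, then hash.
    Two serializations of the same statements produce the same digest."""
    canonical = "\n".join(" ".join(t) for t in sorted(triples))
    return hashlib.sha256(canonical.encode()).hexdigest()

g1 = [("<a>", "<knows>", "<b>"), ("<b>", "<knows>", "<c>")]
g2 = list(reversed(g1))  # same graph, different statement order
assert graph_digest(g1) == graph_digest(g2)
```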
Also, pickle stores the class name to unpickle data into as a (variously-dotted) str. If the version of the object class is not in the class name, pickle will unpickle data from appA.Pickleable into appB.Pickleable (or PickleableV1 into PickleableV2 objects, as long as PickleableV2=PickleableV1 is specified in the deserializer).
So do methods need to be pickled? No for security. Yes because otherwise the appB unpickled data is not isomorphic with the pickled appA.Pickleable class instances.
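One way to unpickle old appA data into a current class without shipping the old code is a pickle.Unpickler subclass that overrides find_class; the class names here are hypothetical:

```python
import io
import pickle

class PickleableV1:
    """Old class whose instances are already stored as pickles."""
    def __init__(self):
        self.x = 1

class PickleableV2:
    """Current class; unpickled state lands in __dict__ as usual."""

class RemappingUnpickler(pickle.Unpickler):
    REMAP = {(PickleableV1.__module__, "PickleableV1"): PickleableV2}
    def find_class(self, module, name):
        # Route the old pickled class name to the current class.
        return self.REMAP.get((module, name)) or super().find_class(module, name)

blob = pickle.dumps(PickleableV1())
obj = RemappingUnpickler(io.BytesIO(blob)).load()
assert isinstance(obj, PickleableV2) and obj.x == 1
```

This only remaps the class reference; it does nothing to make old and new state dicts compatible, which is the versioning problem discussed below.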
One Solution: add a version attribute on each object, store it with every object, and discard it before testing equality by other attributes.
Another solution: include the source object version in the class name that gets stored with every pickled object instance, and try hard to make sure the dest object is the same.
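The first solution as a sketch (class and attribute names are illustrative): store a version with every object and ignore it when testing equality:

```python
class Record:
    SCHEMA_VERSION = 2
    def __init__(self, **attrs):
        self._version = self.SCHEMA_VERSION
        self.__dict__.update(attrs)
    def __eq__(self, other):
        # Compare every attribute except the stored schema version.
        strip = lambda d: {k: v for k, v in d.items() if k != "_version"}
        return strip(self.__dict__) == strip(other.__dict__)

old, new = Record(x=1), Record(x=1)
old._version = 1  # pretend it was unpickled from an older app
assert old == new
```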
Experimental test of the nonlocal energy alteration between two quantum memories
ScholarlyArticle: "Test of Nonlocal Energy Alteration between Two Quantum Memories" (2025) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Notebooks as reusable Python programs
i wanted to like marimo, but the best notebook interface i've tried so far is vscode's interactive window [0]. the important thing is that it's a python file first, but you can divide up the code into cells to run in the jupyter kernel either all at once or interactively.
0: https://code.visualstudio.com/docs/python/jupyter-support-py
Spyder also has these, possibly for longer than vscode [0]. I don't know who had this idea first but I remember some vim plugins doing that long ago, so maybe the vim community?
[0] https://docs.spyder-ide.org/current/panes/editor.html#code-c...
Jupytext docs > The percent format: https://github.com/mwouts/jupytext/blob/main/docs/formats-sc... :
# %% [markdown]
# Another Markdown cell
# %%
# This is a code cell
class A():
def one():
return 1
# %% Optional title [cell type] key="value"
MyST Markdown has: https://mystmd.org/guide/notebooks-with-markdown :
```{code-cell} LANGUAGE
:key: value
CODE TO BE EXECUTED
```
And:
---
kernelspec:
name: javascript
display_name: JavaScript
---
# Another markdown cell
```{code-cell} javascript
// This is a code cell
console.log("hello javascript kernel");
```
But also does not store the outputs in the markdown.
Thanks, this is a good read. I did not know MyST, it is very cool.
The vim plugin I was talking about was vim-slime [0], which seems to date from 2007 and does have regions delimited with #%%.
Slime comes from Emacs originally, but I could not find if the original Emacs slime has regions.
Matlab also has those, which they call code sections [1]. Hard to find when they were introduced. Maybe 2021, but I suspect older.
None of those stores the output of the command.
[0] https://vimawesome.com/plugin/vim-slime
[1] https://www.mathworks.com/help/matlab/matlab_prog/create-and...
The Spyder docs list other implementations of the percent format for notebooks as markdown and for delimiting runnable blocks within source code:
# %%
"^# %%"
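Collecting percent-format cells is a simple regex split on that delimiter; a stdlib sketch:

```python
import re

SOURCE = '''\
# %% [markdown]
# A Markdown cell
# %%
print("a code cell")
'''

# Split on lines that begin with the "# %%" cell delimiter.
cells = re.split(r"(?m)^# %%.*\n", SOURCE)
cells = [c for c in cells if c]  # drop the empty leading chunk
assert len(cells) == 2 and cells[1].startswith("print")
```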
Org-mode was released in 2003:
https://en.wikipedia.org/wiki/Org-mode
Org-mode supports code blocks: https://orgmode.org/manual/Structure-of-Code-Blocks.html :
#+BEGIN_SRC <language> and
#+END_SRC.
Literate programming; LaTeX: https://en.wikipedia.org/wiki/Literate_programming
Notebook interface; Markdown + MathTeX: https://en.wikipedia.org/wiki/Notebook_interface ;
$ \delta_{2 3 4} = 5 $
$$ \Delta_\text{3 4 5} = 56 $$
Show HN: Aiopandas – Async .apply() and .map() for Pandas, Faster API/LLMs Calls
Can this be merged into pandas?
Pandas does not currently install tqdm by default.
pandas-dev/pandas//pyproject.toml [project.optional-dependencies] https://github.com/pandas-dev/pandas/blob/8943c97c597677ae98...
Dask solves for various adjacent problems; IDK if pandas, dask, or dask-cudf would be faster with async?
Dask docs > Scheduling > Dask Distributed (local) https://docs.dask.org/en/stable/scheduling.html#dask-distrib... :
> Asynchronous Futures API
Dask docs > Deploy Dask Clusters; local multiprocessing poll, k8s (docker desktop, podman-desktop,), public and private clouds, dask-jobqueue (SLURM,), dask-mpi: https://docs.dask.org/en/stable/deploying.html#deploy-dask-c...
Dask docs > Dask DataFrame: https://docs.dask.org/en/stable/dataframe.html :
> Dask DataFrames are a collection of many pandas DataFrames.
> The API is the same. The execution is the same.
> [concurrent.futures and/or @dask.delayed]
tqdm.dask: https://tqdm.github.io/docs/dask/#tqdmdask .. tests/tests_pandas.py: https://github.com/tqdm/tqdm/blob/master/tests/tests_pandas.... , tests/tests_dask.py: https://github.com/tqdm/tqdm/blob/master/tests/tests_dask.py
tqdm with dask.distributed: https://github.com/tqdm/tqdm/issues/1230#issuecomment-222379... , not yet a PR: https://github.com/tqdm/tqdm/issues/278#issuecomment-5070062...
dask.diagnostics.progress: https://docs.dask.org/en/stable/diagnostics-local.html#progr...
dask.distributed.progress: https://docs.dask.org/en/stable/diagnostics-distributed.html...
dask-labextension runs in JupyterLab and has a parallel plot visualization of the dask task graph and progress through it: https://github.com/dask/dask-labextension
dask-jobqueue docs > Interactive Use > Viewing the Dask Dashboard: https://jobqueue.dask.org/en/latest/clusters-interactive.htm...
https://examples.dask.org/ > "Embarrassingly parallel Workloads" tutorial re: "three different ways of doing this with Dask: dask.delayed, concurrent.Futures, dask.bag": https://examples.dask.org/applications/embarrassingly-parall...
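The embarrassingly-parallel pattern from that tutorial, shrunk to the stdlib: concurrent.futures can parallelize a map over values with a thread pool, which is the same niche aiopandas targets for I/O-bound API calls (the function here is a stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def slow_api_call(x):
    # Stand-in for an I/O-bound request (API call, LLM request, ...).
    return x * 2

values = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_api_call, values))
assert results == [2, 4, 6, 8]
```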
Thank you for the input! To be honest, I don’t use Dask often, and as a regular Pandas user, I don’t feel the most qualified to comment—but here we go.
Can this be merged into Pandas?
I’d be honored if something I built got incorporated into Pandas! That said, keeping aiopandas as a standalone package has the advantage of working with older Pandas versions, which is useful for workflows where upgrading isn’t feasible. I also can’t speak to the downstream implications of adding this directly into Pandas.
Pandas does not install tqdm by default.
That makes sense, and aiopandas doesn’t require tqdm either. You can pass any class with __init__, update, and close methods as the tqdm argument, and it will work the same. Keeping dependencies minimal helps avoid unnecessary breakage.
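A duck-typed stand-in along those lines (only __init__, update, and close, per the comment above; how aiopandas consumes it is as described by its author, not verified here):

```python
class PlainProgress:
    """Duck-types the bits of tqdm a caller needs: update() and close()."""
    def __init__(self, total=None, **kwargs):
        self.total, self.n = total, 0
    def update(self, n=1):
        self.n += n
        print(f"{self.n}/{self.total}")
    def close(self):
        print("done")

bar = PlainProgress(total=3)
for _ in range(3):
    bar.update()
bar.close()
```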
What about Dask?
I’m not a regular Dask user, so I can’t comment much on its internals. Dask already supports async coroutines (Dask Async API), but for simple async API calls or LLM requests, aiopandas is meant to be a lightweight extension of Pandas rather than a full-scale parallelization framework. If you’re already using Dask, it probably covers most of what you need, but if you’re just looking to add async support to Pandas without additional complexity, aiopandas might be a more lightweight option.
Fair benchmarks would justify merging aiopandas into pandas. Benchmark grid axes: aiopandas, dtype_backend="pyarrow", dask-cudf
pandas pyarrow docs: https://pandas.pydata.org/docs/dev/user_guide/pyarrow.html
/? async pyarrow: https://www.google.com/search?q=async+pyarrow
/? repo:apache/arrow async language:Python : https://github.com/search?q=repo%3Aapache%2Farrow+async+lang... :
test_flight_async.py https://github.com/apache/arrow/blob/main/python/pyarrow/tes...
pyarrow/src/arrow/python/async.h: https://github.com/apache/arrow/blob/main/python/pyarrow/src... : "Bind a Python callback to an arrow::Future."
--
dask-cudf: https://docs.rapids.ai/api/dask-cudf/stable/ :
> Neither Dask cuDF nor Dask DataFrame provide support for multi-GPU or multi-node execution on their own. You must also deploy a dask.distributed cluster to leverage multiple GPUs. We strongly recommend using Dask-CUDA to simplify the setup of the cluster, taking advantage of all features of the GPU and networking hardware.
cudf.pandas > FAQ > "When should I use cudf.pandas vs using the cuDF library directly?" https://docs.rapids.ai/api/cudf/stable/cudf_pandas/faq/#when... :
> cuDF implements a subset of the pandas API, while cudf.pandas will fall back automatically to pandas as needed.
> Can I use cudf.pandas with Dask or PySpark?
> [Not at this time, though you can change the dask df to e.g. cudf, which does not implement the full pandas dataframe API]
--
dask.distributed docs > Asynchronous Operation; re Tornado or asyncio: https://distributed.dask.org/en/latest/asynchronous.html#asy...
--
tqdm.dask, tqdm.notebook: https://github.com/tqdm/tqdm#ipythonjupyter-integration
from tqdm.notebook import trange, tqdm
for n in trange(10):
time.sleep(1)
--
But then TPUs instead of, or in addition to, async GPUs;
TensorFlow TPU docs: https://www.tensorflow.org/guide/tpu
Optimization by Decoded Quantum Interferometry
ScholarlyArticle: "Optimization by Decoded Quantum Interferometry" (2025) https://arxiv.org/abs/2408.08292
NewsArticle: "Quantum Speedup Found for Huge Class of Hard Problems" (2025) https://www.quantamagazine.org/quantum-speedup-found-for-hug...
High-fidelity entanglement between telecom photon and room-temp quantum memory
ScholarlyArticle: "High-fidelity entanglement between a telecom photon and a room-temperature quantum memory" (2025) https://arxiv.org/html/2503.11564v1
NewsArticle: "Scientists Achieve Telecom-Compatible Quantum Entanglement with Room-Temperature Memory" (2025) https://thequantuminsider.com/2025/03/19/scientists-achieve-...
Sound that can bend itself through space, reaching only your ear in a crowd
Which quantum operators can be found in [ultrasonic acoustic] wave convergences?
Surround sound systems must be calibrated in order to place the sweet spots of least cacophony.
There also exist ultrasonic scalpels that enable noninvasive subcutaneous surgical procedures that function by wave guiding to cause convergence.
"Functional ultrasound through the skull" (2025) https://news.ycombinator.com/item?id=42086408
"Neurosurgeon pioneers Alzheimer's, addiction treatments using ultrasound [video]" (2024) https://news.ycombinator.com/item?id=39556615
Did the Particle Go Through the Two Slits, or Did the Wave Function?
According to modern QFT, there are no particles except as an approximation. There are no fields except as mathematical formalisms. There's no locality. There is instead some kind of interaction of graph nodes, representing quantum interactions, via "entanglement" and "decoherence".
In this model, there are no "split particle" paradoxes, because there are no entities that resemble the behavior of macroscopic bodies, with our intuitions about them.
Imagine a Fortran program, with some neat index-based DO loops, and some per-element computations on a bunch of big arrays. When you look at its compiled form, you notice that the neat loops are now something weird, produced by automatic vectorization. If you try to find out how it runs, you notice that the CPU not only has several cores that run parts of the loop in parallel, but the very instructions in one core run out of order, while still preserving the data dependency invariants.
"But did the computation of X(I) run before or after the computation of X(I+1)?!", you ask in desperation. You cannot tell. It depends. The result is correct though, your program has no bugs and computes what it should. It's counter-intuitive, but the underlying hardware reality is counter-intuitive. It's not illogical or paradoxical though.
This is incorrect. There are particles. They are excitations in the field.
There still is the 'split particle paradox' because QFT does not solve the measurement problem.
The 'some kind of interaction of graph nodes' by which I am guessing you are referring to Feynman diagrams are not of a fundamental nature. They are an approximation known as 'perturbation theory'.
I think what they must be referring to is the fact that particles are only rigorously defined in the free theory. When coupling is introduced, how the free theory relates to the coupled theory depends on heuristic/formal assumptions.
We're leaving my area of understanding, but I believe Haag's theorem shows that the naïve approach, where the interacting and free theories share a Hilbert space, completely fails -- even stronger than that, _no_ Hilbert space could even support an interacting QFT (in the ways required by scattering theory). This is a pretty strong argument against the existence of particles except as asymptotic approximations.
Since we don't have consensus on a well-defined, non-perturbative gauge theory, mathematically speaking it's difficult to make any firm statements about what states "exist" in absolute. (I'm certain that people working on the various flavours of non-perturbative (but still heuristic) QFT -- like lattice QFT -- would have more insights about the internal structure of non-asymptotic interactions.)
Though it doesn't resolve whether a "quantum" is a particle or a measurable convergence of waves, electrons and photons have been observed with high-speed imaging.
"Quantum microscopy study makes electrons visible in slow motion" https://news.ycombinator.com/item?id=40981054
There exist single photon emitters and single photon detectors.
It would follow that there are single photons if there are single-photon emitters:
Single-photon source: https://en.wikipedia.org/wiki/Single-photon_source
QFT is not yet reconciled with (n-body) [quantum] gravity, which it predicts with 100% error; no better than random chance.
IIRC, QFT cannot explain why superfluid helium creeps up the sides of a container against gravity, given the masses of each particle/wave of the superfluid, the beaker, and the earth, sun, and moon; though we say that gravity at any given point is the net sum of the directional vectors acting upon that point, or actually of gravitational waves with phase and amplitude.
You said "gauge theory",
"Topological gauge theory of vortices in type-III superconductors" https://news.ycombinator.com/item?id=41803662
From https://news.ycombinator.com/context?id=43081303 .. https://news.ycombinator.com/item?id=43310933 :
> Probably not gauge symmetry there, then.
Generate impressive-looking terminal output, look busy when stakeholders walk by
The amount of time I spent getting asciifx and agg to work with syntax highlighting, because IPython now uses Python Prompt Toolkit instead of readline.
In order to leave a Python coding demo tutorial GIF/MP4 looping on their monitor(s) at conferences or science fairs.
stemkiosk arithmetic in Python GIF v0.1.2: https://github.com/stemkiosk/stemkiosk/blob/e8f54704c6de32fb...
Ask HN: Project Management Class Recommendations?
I need to improve my skills in defining project goals, steps, and timelines and aligning teams around them.
Can anyone recommend some of their favorite online courses in this area? Particularly in technical environments.
Justified methods?
"How did software get so reliable without proof? (1996) [pdf]" https://news.ycombinator.com/item?id=42425617
CPM: Critical Path Method, resource scheduling, agile complexity estimation, planning poker, WBS Work Breakdown Structure: https://news.ycombinator.com/item?id=33582264#33583666
Project network: https://en.wikipedia.org/wiki/Project_network
Project planning: https://en.wikipedia.org/wiki/Project_planning
PERT: Program evaluation and review technique: https://en.m.wikipedia.org/wiki/Program_evaluation_and_revie...
Project management > Approaches of project management,: https://en.wikipedia.org/wiki/Project_management
PMI: Project Management Institute > Certifications: https://en.wikipedia.org/wiki/Project_Management_Institute#C...
PMBOK: Project Management Body of Knowledge > Contents: https://en.wikipedia.org/wiki/Project_Management_Body_of_Kno...
Agile software development > Methods: TDD, https://en.wikipedia.org/wiki/Agile_software_development
/? site:github.com inurl:awesome project management ; Books, Courses https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...
Class Central > Project Management Courses and Certifications: https://www.classcentral.com/subject/project-management
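The Critical Path Method linked above reduces to a longest-path computation over the task-dependency DAG; a minimal stdlib-Python sketch (task names and durations are made-up examples):

```python
from collections import defaultdict, deque

def critical_path(tasks):
    """tasks: {name: (duration, [dependency names])} -> (total time, path).
    Longest path through the dependency DAG, via Kahn's topological sort."""
    indegree = {name: len(deps) for name, (_, deps) in tasks.items()}
    dependents = defaultdict(list)
    for name, (_, deps) in tasks.items():
        for d in deps:
            dependents[d].append(name)
    finish, pred = {}, {}   # earliest finish time; longest-path predecessor
    queue = deque(n for n, k in indegree.items() if k == 0)
    while queue:
        name = queue.popleft()
        duration, deps = tasks[name]
        finish[name] = max((finish[d] for d in deps), default=0) + duration
        pred[name] = max(deps, key=lambda d: finish[d]) if deps else None
        for nxt in dependents[name]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    end = max(finish, key=finish.get)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = pred[node]
    return finish[end], path[::-1]
```

Tasks not on the returned path have slack; the path itself is what PERT/CPM schedules around.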
Which processes scale without reengineering, once DevSecOps is automated with GitOps?
Brooks's law (observation) > Exceptions and possible solutions: https://en.wikipedia.org/wiki/Brooks%27s_law
Business process re-engineering > See also: https://en.wikipedia.org/wiki/Business_process_re-engineerin... :
Learning agenda: https://en.wikipedia.org/wiki/Learning_Agenda ; CLO: Chief Learning Officer
Agendas; Robert's Rules of Order has a procedure for motioning to add something to a meeting agenda.
Robert's Rules of Order: https://en.wikipedia.org/wiki/Robert%27s_Rules_of_Order
3 Questions; a status report to have ready for team chat, async or time-bound: Since [the last meeting], Before [the next], Obstacles
Stand-up meeting > Three questions: https://en.wikipedia.org/wiki/Stand-up_meeting#Three_questio...
Recursion kills: The story behind CVE-2024-8176 in libexpat
> Please leave recursion to math and keep it out of (in particular C) software: it kills and will kill again.
This is just nonsense. The issue is doing an unbounded amount of resource consuming work. Don't do an unbounded amount of resource consuming work, regardless of whether that work is expressed in a recursive or iterative form.
Any recursive function can be transformed into a tail recursive form, exchanging stack allocation for heap allocation. And any tail recursive program can be transformed into a loop (a trampoline). It's really not the language construct that is the issue here.
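A minimal Python sketch of the trampoline idea described above (toy example, not from the libexpat fix): the tail call is returned as a thunk, and a driver loop replaces the call stack.

```python
def trampoline(f, *args):
    """Drive a tail-recursive function: keep calling returned thunks in a
    loop until a non-callable final value appears (toy convention: a
    callable result always means 'more work to do')."""
    result = f(*args)
    while callable(result):
        result = result()
    return result

def factorial(n, acc=1):
    # Tail-recursive form: the "recursive call" is returned as a
    # zero-argument lambda, so it runs in trampoline()'s loop instead
    # of on the call stack.
    if n <= 1:
        return acc
    return lambda: factorial(n - 1, acc * n)

# Works far beyond sys.getrecursionlimit(), since stack depth stays constant.
print(trampoline(factorial, 10))  # 3628800
```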
> Any recursive function can be transformed into a tail recursive form, exchanging stack allocation for heap allocation.
Can't all recursive functions be transformed to stack-based algorithms? And then, generally, isn't there a flatter resource consumption curve for stack-based algorithms, unless the language supports tail recursion?
E.g. Python has sys.getrecursionlimit() == 1000 by default, RecursionError, collections.deque; and "Performance of the Python 3.14 tail-call interpreter": https://news.ycombinator.com/item?id=43317592
> Can't all recursive functions be transformed to stack-based algorithms?
Yes, you can explicitly manage the stack. You still consume memory at the same rate, but now it is a heap-allocated stack that is programmatically controlled, instead of a stack-allocated stack that is automatically controlled.
The issue is either:
* whatever the parser is doing is already tail recursive, but not expressed in C in a way that doesn't consume stack. In this case it's trivial, but perhaps tedious, to convert it to a form that doesn't use unbounded resources.
* whatever the parser is doing uses the intermediate results of the recursion, and hence is not trivially tail recursive. In this case any reformulation of the algorithm using different language constructs will continue to use unbounded resources, just heap instead of stack.
It's not clear how the issue was fixed. The only description in the blog post is "It used the same mechanism of delayed interpretation" which suggests they didn't translate the existing algorithm to a different form, but changed the algorithm itself to a different algorithm. (It reads like they went from eager to lazy evaluation.)
Either way, it's not recursion itself that is the problem.
> You still consume memory at the same rate, but now it is a heap-allocated stack that is programmatically controlled, instead of a stack-allocated stack that is automatically controlled
To be precise:
In C (and IIUC now Python, too), a stack frame is created for each function call (unless tail call optimization is applied by the compiler).
To avoid creating an unnecessary stack frame for each recursive call, instead create a collections.deque, enqueue traversed nodes at the beginning or end, and process them in a loop within one non-recursive function.
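A minimal sketch of the two styles, assuming a dict-based tree with a "children" key (hypothetical data shape):

```python
from collections import deque

def count_nodes_recursive(tree):
    # One stack frame per level of nesting; raises RecursionError once
    # the depth exceeds sys.getrecursionlimit() (1000 by default).
    return 1 + sum(count_nodes_recursive(c) for c in tree.get("children", []))

def count_nodes_iterative(tree):
    # Explicit, heap-allocated work queue: depth is bounded by available
    # memory rather than by the interpreter's recursion limit.
    todo, count = deque([tree]), 0
    while todo:
        node = todo.pop()   # .pop() = depth-first; .popleft() = breadth-first
        count += 1
        todo.extend(node.get("children", []))
    return count
```

The iterative version handles trees thousands of levels deep that would raise RecursionError in the recursive one.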
Is Tail Call Optimization faster than collections.deque in a loop in one function scope stack frame?
Yes, that is basically it. A tail call should be the same jump instruction as a loop, but performance really depends on the language implementation and is hard to make general statements about.
The Lost Art of Logarithms
Notes from "How should logarithms be taught?" (2021) https://news.ycombinator.com/item?id=28519356 re: logarithms in the Python standard library, NumPy, SymPy, TensorFlow, PyTorch, Wikipedia
Jupyter JEP: AI Representation for tools that interact with notebooks
(TIL about) MCP: Model Context Protocol; https://modelcontextprotocol.io/introduction
MCP Specification: https://spec.modelcontextprotocol.io/specification/draft/
/? Model Context Protocol: https://hn.algolia.com/?q=Model+Context+Protocol
From https://github.com/jupyter/enhancement-proposals/pull/129#is... :
> Would a JSON-LD 'ified nbformat and `_repr_jsonld_()` solve for this too?
There's a (closed) issue to: "Add JSONLD @context to the top level .ipynb node nbformat#44"
Datoviz: High-Performance GPU Scientific Visualization Library with Vulkan
"Datoviz: high-performance GPU scientific data visualization C/C++/Python library" https://github.com/datoviz/datoviz
> In the long term, Datoviz will mostly be used as a VisPy 2.0 backend.
ctypes bindings for Python
Matplotlib and MATLAB colormaps
0.4: WebGPU, Jupyter
... jupyter-xeus and JupyterLite; https://github.com/jupyter-xeus/xeus
From https://news.ycombinator.com/item?id=43201706 :
> jupyter-xeus supports environment.yml with jupyterlite with packages from emscripten-forge [a Quetz repo built with rattler-build]
> emscripten-forge src: https://github.com/emscripten-forge/recipes/tree/main/recipe... web: https://repo.mamba.pm/emscripten-forge
High-performance computing, with much less code
> The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.
exo-lang/exo: https://github.com/exo-lang/exo
Proposed Patches Would Allow Using Linux Kernel's Libperf from Python
> Those interested in the prospects of leveraging the libperf API from Python code can see this RFC patch series for all the details: https://lore.kernel.org/lkml/20250313075126.547881-1-gautam@... :
>> In this RFC series, we are introducing a C extension module to allow python programs to call the libperf API functions. Currently libperf can be used by C programs, but expanding the support to python is beneficial for python users.
Beta: Connect your Colab notebooks directly to Kaggle's Jupyter Servers
"Kaggle's notebook environment is now based on Colab's Docker image" https://news.ycombinator.com/item?id=42480582
kaggle/docker-python: https://github.com/Kaggle/docker-python
Google Colaboratory docs > Local runtimes: https://research.google.com/colaboratory/local-runtimes.html :
> https://us-docker.pkg.dev/colab-images/public/runtime
docker run --gpus=all -p 127.0.0.1:9000:8080 us-docker.pkg.dev/colab-images/public/runtime
Is Rust a good fit for business apps?
Rust is hard to get started with, but once you reach optimal development speed, I can't see how you can go back to any other language.
I have a production application that runs on Rust. It has never crashed (yet); it's rock solid and does not require frequent restarts of the container due to memory leaks or whatever, and every time I need to fix something, I can jump into the code after weeks of not touching it, confident that my changes won't break anything, as long as the code compiles (and is free of logical bugs).
I can't say the same about any other language, and I use a few of them in production: NodeJS, Ruby, Python, Java. Each of them has their own quirks, and I'm never 100% confident that changes in one place of the code won't cause harm in another place, or that the code is free of stupid bugs like null-pointer exceptions.
100% agreed. After writing Rust as my day job and using it in production for the last 2 years, my only criticism is the lack of a good high level standard library (like Go) and the insanity and fragmentation of Future type signatures.
Though at this point, I wish I could write everything in Rust - it's fantastic
Also, I can't wait to (practically) use it as a replacement for JavaScript on the web
So you can't write everything in Rust? You mean like business apps? Just asking.
Last I checked, WASM support isn't quite there, yet, basically. I haven't checked in a little while, though.
re: noarch, emscripten-32, and/or emscripten-wasm32 WASM packages of rust on emscripten-forge a couple days ago. [1][2]
emscripten-forge is a package repo of conda packages for `linux-64 emscripten-32 emscripten-wasm32 osx-arm64 noarch` built with rattler-build and hosted with quetz: https://repo.mamba.pm/emscripten-forge
Evcxr is a Rust kernel for Jupyter; but jupyter-xeus is the new (C++) way to write JupyterLite kernels like xeus-sqlite, xeus-lua, and xeus-javascript
[1]: evcxr_jupyter > "jupyter lite and conda-forge feedstock(s)" https://github.com/evcxr/evcxr/issues/399
[2]: emscripten-forge > "recipes_emscripten/rust and evxcr_jupyter kernel" https://github.com/emscripten-forge/recipes/issues/1983
container2wasm (c2w) might already compile Rust to WASI WASM? https://github.com/container2wasm/container2wasm
Trump's Big Bet: Americans Will Tolerate Downturn to Restore Manufacturing
Is there a reason to think a bunch of manufacturing will suddenly pop up and push prices down?
I’m not convinced there’s any real strategy here.
The SDGs have Goals, Targets, and Indicators.
What are the goals for domestic manufacturing?
FRED series tagged "Manufacturing" https://fred.stlouisfed.org/tags/series?t=manufacturing
"Manufacturers' New Orders: Total Manufacturing (AMTMNO)" https://fred.stlouisfed.org/series/AMTMNO
The DuckDB Local UI
The UI aesthetics look similar to the excellent Rill, also powered by DuckDB: https://www.rilldata.com/
Rill has better built in visualizations and pivot tables and overall a polished product with open-source code in Go/Svelte. But the DuckDB UI has very nice Jupyter notebook-style "cells" for editing SQL queries.
Rill founder here, I have no comment on the UI similarity :) but I would emphasize our vision is building DuckDB-powered metrics layers and exploratory dashboards -- which we presented at DuckCon #6 last month, PDF below [1] -- and less on notebook style UIs like Hex and Jupyter.
Rill is fully open-source under the Apache license. [2]
[1] https://blobs.duckdb.org/events/duckcon6/mike-driscoll-rill-...
WhatTheDuck does SQL with duckdb-wasm
Pygwalker does open-source descriptive statistics and charts from pandas dataframes: https://github.com/Kanaries/pygwalker
ydata-profiling does open-source Exploratory Data Analysis (EDA) with Pandas and Spark DataFrames and integrates with various apps: https://github.com/ydataai/ydata-profiling #integrations, #use-cases
xeus-sqlite is a xeus kernel for jupyter and jupyterlite which has Vega visualizations for sql queries: https://github.com/jupyter-xeus/xeus-sqlite
jupyterlite-xeus installs packages specified in an environment.yml from emscripten-forge: https://jupyterlite-xeus.readthedocs.io/en/latest/environmen...
emscripten-forge has xeus-sqlite and pandas and numpy and so on; but not yet duckdb-wasm: https://repo.mamba.pm/emscripten-forge
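As a sketch of what the environment.yml mentioned above might look like for jupyterlite-xeus (channel URL from the link above; the exact package list here is a hypothetical example):

```yaml
# Hypothetical environment.yml for a JupyterLite deployment with
# jupyterlite-xeus; packages resolve from emscripten-forge.
name: xeus-lite-env
channels:
  - https://repo.mamba.pm/emscripten-forge
dependencies:
  - xeus-sqlite
  - numpy
  - pandas
```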
duckdb-wasm "Feature Request: emscripten-forge package" https://github.com/duckdb/duckdb-wasm/discussions/1978
Scientists discover an RNA that repairs DNA damage
ScholarlyArticle: "NEAT1 promotes genome stability via m6A methylation-dependent regulation of CHD4" (2025) https://genesdev.cshlp.org/content/38/17-20/915
"Supercomputer draws molecular blueprint for repairing damaged DNA" (2025) https://news.ycombinator.com/item?id=43349021
Supercomputer draws molecular blueprint for repairing damaged DNA
"Molecular architecture and functional dynamics of the pre-incision complex in nucleotide excision repair" (2025) https://www.nature.com/articles/s41467-024-52860-y
Also, "Scientists discover an RNA that repairs DNA damage" (2025) https://news.ycombinator.com/item?id=43313781
"NEAT1 promotes genome stability via m6A methylation-dependent regulation of CHD4" (2025) https://genesdev.cshlp.org/content/38/17-20/915
Tunable superconductivity and Hall effect in a transition metal dichalcogenide
ScholarlyArticle: "Tunable superconductivity coexisting with the anomalous Hall effect in a transition metal dichalcogenide" (2025) https://www.nature.com/articles/s41467-025-56919-2
D-Wave First to Demonstrate Quantum Supremacy on Useful, Real-World Problem
ScholarlyArticle: "Beyond-classical computation in quantum simulation" (2025) https://www.science.org/doi/10.1126/science.ado6285
Move over graphene Scientists forge bismuthene and host of atoms-thick metals
From https://news.ycombinator.com/item?id=43337868 :
> The new bismuth-based transistor could revolutionize chip design, offering higher efficiency while bypassing silicon’s limitations [...]
Can bismuthene and similar be nanoimprinted?
Peer-to-peer file transfers in the browser
I keep a long list of browser based and CLI p2p file transfer tools[1].
LimeWire (which now has its own cryptocurrency, probably) has been on a rampage recently, acquiring some really good tools including ShareDrop and SnapDrop. https://pairdrop.net/ is the last one standing so far.
[1]: https://gist.github.com/SMUsamaShah/fd6e275e44009b72f64d0570...
/? inurl:awesome p2p site:github.com: https://google.com/search?q=inurl:awesome+p2p+site:github.co...
Peer-to-peer: https://en.wikipedia.org/wiki/Peer-to-peer
How to build your own replica of TARS from Interstellar
mujoco > "Renderer API" https://github.com/google-deepmind/mujoco/issues/2487 re: renderers in addition to Unity
Maybe a 3d robot on screen?
Low-power 2D gate-all-around logics via epitaxial monolithic 3D integration
ScholarlyArticle: "Low-power 2D gate-all-around logics via epitaxial monolithic 3D integration" (2025) https://www.nature.com/articles/s41563-025-02117-w
"China’s new silicon-free chip beats Intel with 40% more speed and 10% less energy" (2025) https://interestingengineering.com/innovation/chinas-chip-ru... :
> The new bismuth-based transistor could revolutionize chip design, offering higher efficiency while bypassing silicon’s limitations [...]
> Their newly developed 2D transistor is said to be 40% faster than the latest 3-nanometre silicon chips from Intel and TSMC while consuming 10% less energy. This innovation, they say, could allow China to bypass the challenges of silicon-based chipmaking entirely.
> “It is the fastest, most efficient transistor ever,” according to an official statement published last week on the PKU website.
Does Visual Studio rot the mind? (2005)
Absolutely it does. In the same way Socrates warned us about books, VS externalizes our memory and ability. and makes us reliant on a tool to accomplish something we have the ability to do without. This reliance goes even further to make us dependent on it as our natural ability withers from neglect.
I cannot put it more plainly: it incentivizes us to let a part of us atrophy. It would be like giving up the ability to run a mile because our reliance on cars weakened our legs and de-conditioned us to the point of making it physically impossible.
Cloudflare: New source of randomness just dropped
Another source of random entropy better than a wall of lava lamps:
>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
But is that randomness good enough for rngd to continually re-seed an RNG with?
(Is our description of such measurable fluctuations in the quantum foam inadequate to predict what we're calling random?)
> google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid... .. docs: https://github.com/google/paranoid_crypto/blob/main/docs/ran...
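As a toy illustration of the kind of check such randomness test suites run (this is the NIST SP 800-22 frequency/monobit test formula, not paranoid_crypto's API):

```python
import math
import secrets

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the null
    hypothesis that the bit stream is unbiased."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 per one-bit, -1 per zero-bit
    return math.erfc(abs(s) / math.sqrt(2 * n))

# A healthy RNG should only rarely fall below the conventional 0.01 threshold.
p = monobit_pvalue([secrets.randbits(1) for _ in range(10_000)])
```

Passing monobit is necessary but nowhere near sufficient; real suites run dozens of such tests.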
Stem cell therapy trial reverses "irreversible" damage to cornea
From https://news.ycombinator.com/item?id=37204123 (2023) :
> From "Sight for sore eyes" (2009) https://newsroom.unsw.edu.au/news/health/sight-sore-eyes :
>> "A contact lens-based technique for expansion and transplantation of autologous epithelial progenitors for ocular surface reconstruction" (2009) http://dx.doi.org/10.1097/TP.0b013e3181a4bbf2
>> In this study, they found that only one brand (Bausch and Lomb IIRC) of contact lens worked well as a scaffold for the SC
Good to see stem cell research in the US.
Stem cell laws and policy in the United States: https://en.m.wikipedia.org/wiki/Stem_cell_laws_and_policy_in...
If you witness a cardiac arrest, here's what to do
From https://news.ycombinator.com/item?id=39850383#39863280 :
> Basic life support (BLS) https://en.wikipedia.org/wiki/Basic_life_support :
>> DRSABCD: Danger, Response, Send for help, Airway, Breathing, CPR, Defibrillation
> "Drs. ABCD"
From "Defibrillation devices save lives using 1k times less electricity" (2024) https://news.ycombinator.com/item?id=42061556 :
> "New defib placement increases chance of surviving heart attack by 264%" (2024) https://newatlas.com/medical/defibrillator-pads-anterior-pos... :
>> Placing defibrillator pads on the chest and back, rather than the usual method of putting two on the chest, increases the odds of surviving an out-of-hospital cardiac arrest by more than two-and-a-half times, according to a new study.
From SBA.gov blog > "Review Your Workplace Safety Policies" (2019) https://www.sba.gov/blog/review-your-workplace-safety-polici... :
> Also, consider offering training for CPR to employees. Be sure to have an automatic external defibrillator (AED) on site and have employees trained on how to use it. The American Red Cross and various other organizations offer free or low-cost training.
Ask HN: Optical Tweezers for Neurovascular Resection?
Applications, Scale, Feasibility?
Optical tweezers: https://en.wikipedia.org/wiki/Optical_tweezers
NewsArticle: "Engineers create a chip-based tractor beam for biological particles" (2024) https://phys.org/news/2024-10-chip-based-tractor-biological-...
ScholarlyArticle: "Optical tweezing of microparticles and cells using silicon-photonics-based optical phased arrays" (2024) https://www.nature.com/articles/s41467-024-52273-x
Europe bets once again on RISC-V for supercomputing
China recently moved in that direction. That would be a nice collaboration to see between the EU and China.
China to publish policy to boost RISC-V chip use nationwide, sources say https://www.reuters.com/technology/china-publish-policy-boos...
If you ignore the military ambitions of China and the fact they’re openly sharing technology with Russia, perhaps.
I don’t see anything but regret for Europe several decades from now if they decide to start providing China with the technical expertise they’re currently lacking in this space.
This is all about China trying to find a way to escape the pressure of sanctions from Europe and the US.
Didn't ARM start in Europe?
And RISC-V started at UC Berkeley in 2010.
RISC-V: https://en.wikipedia.org/wiki/RISC-V
"Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490 with nanoimprinting (which 10x's current gen nanolithography FWIU)
"Nanoimprint Lithography Aims to Take on EUV" (2025) https://news.ycombinator.com/item?id=42575111 :
> Called nanoimprint lithography (NIL), it’s capable of patterning circuit features as small as 14 nanometers—enabling logic chips on par with Intel, AMD, and Nvidia processors now in mass production.
Online Embedded Rust Simulator
TX for this! My older kid and I are learning Rust with Rustlings, but this will help me add a physical element to it (even though it's simulated), eventually we may get an esp32. Younger kid loves doing Microsoft microbit, he will definitely be interested in this.
Wokwi also supports Pi Pico w/ Python: https://news.ycombinator.com/item?id=38034530 , https://news.ycombinator.com/item?id=36970206
This kit connects a BBC Microbit v2 to a USB-chargeable Li-Ion battery on a mountable expansion board with connectors for Motors for LEGOs ® and a sonar:bit ultrasonic sensor: "ELECFREAKS micro:bit 32 IN 1 Wonder Building Kit, Programmable K12 Educational Learning Kit with [MOC blocks / Technics®] Building Blocks/Sensors/Wukong Expansion Board" https://shop.elecfreaks.com/products/elecfreaks-micro-bit-32...
There are docs on GitHub for the kit: https://github.com/elecfreaks/learn-en/tree/master/microbitK... .. web: https://wiki.elecfreaks.com/en/microbit/building-blocks/wond...
"Raspberry Pico Rust support" https://github.com/wokwi/wokwi-features/issues/469#issuecomm... :
> For future people who come across this issue: you can still simulate Rust on Raspberry Pi Pico with Wokwi, but you'll have to compile the firmware yourself. Then you can load it into Wokwi for VS Code or Wokwi for Intellij.
Scientists Confirm the Existence of 'Second Sound'
ScholarlyArticle: "Thermography of the superfluid transition in a strongly interacting Fermi gas" (2025) https://www.science.org/doi/10.1126/science.adg3430
New visualization but not a new phenomenon. I observed it in the 510 lab when I was getting my physics PhD at Cornell.
Armchair physicist with dissonance about superfluids (or Bose-Einstein condensates) which break all the existing models.
And my armchair physicist notes; /? superfluid https://westurner.github.io/hnlog/ #search=superfluid
I think I've probably already directly mentioned Fedi's?
Finally found this; https://news.ycombinator.com/item?id=42957014 :
> Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
- [ ] Models fluidic attractor systems
- [ ] Models superfluids
- [ ] Models n-body gravity in fluidic systems
- [ ] Models retrocausality
Re: gauge theory, superfluids: https://news.ycombinator.com/item?id=43081303
He said there's a newer version of this:
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
People Are Paying $5M and $1M to Dine with Donald Trump
Sleaziest thing I've ever heard!
Isn't that illegal influence peddling?
For the record, when Buffett auctions a meeting for pay, it's for charity; and he's not on the clock as a public servant.
For context,
Change.org (2007) https://en.wikipedia.org/wiki/Change.org
We The People (2009-01) https://en.wikipedia.org/wiki/We_the_People_(petitioning_sys... :
> The right "to petition the Government for a redress of grievances" is guaranteed by the United States Constitution's First Amendment. [...]
> Overview > Thresholds: Under the Obama administration's rules, a petition had to reach 150 signatures (Dunbar's Number) within 30 days to be searchable on WhiteHouse.gov, according to Tom Cochran, former director of digital technology. [8] It had to reach 100,000 signatures within 30 days to receive an official response. [9] The original threshold was set at 5,000 signatures on September 1, 2011,[10] was raised to 25,000 on October 3, 2011, [11] and raised again to 100,000 as of January 15, 2013. [12] The White House typically would not comment when a petition concerned an ongoing investigation. [13]
> Sleaziest
Sorry, that's ad hominem (name calling), not a valid argument.
Is it civilly or criminally illegal for the standing president to do "sell the plate" fundraisers for PACs that aren't kickbacks?
Where is the current record of such receipts?
Kickback (bribery): https://en.wikipedia.org/wiki/Kickback_(bribery)
Influence peddling: https://en.wikipedia.org/wiki/Influence_peddling
US Constitution > Article II > Section 4: https://constitution.congress.gov/constitution/article-2/#ar...
> The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
Impeachment in the United States: https://en.wikipedia.org/wiki/Impeachment_in_the_United_Stat... :
> The Constitution limits grounds of impeachment to "Treason, Bribery, or other high Crimes and Misdemeanors", [2] but does not itself define "high crimes and misdemeanors".
Presumably, Bribery needn't be defined in the Constitution because such laws and rules are the business of the Legislative and the Judicial branches, and such rules apply to all people.
Buffett is not a public servant in any way, on or off the clock. He is entirely a private citizen.
I agree. As a fund or holding company manager, Mr. Buffett has a fiduciary obligation to shareholders.
The president has a fiduciary obligation to the country.
https://harvardlawreview.org/print/vol-132/faithful-executio...
In terms of fiduciary and Constitutional - or Constitutional (including fiduciary) - obligations, is there any penalty for violating an Oath of Office?
US Constitution > Article II.S1.C8.1 "Oath of Office for the Presidency": https://constitution.congress.gov/browse/essay/artII-S1-C8-1... :
> Before he enter on the Execution of his Office, he shall take the following Oath or Affirmation:– I do solemnly swear (or affirm) that I will faithfully execute the Office of President of the United States, and will to the best of my Ability, preserve, protect and defend the Constitution of the United States.
US Constitution > Article II.S1.C8.1.5 "Violation of the Presidential Oath" https://constitution.congress.gov/browse/essay/artII-S1-C8-1...
Oath of office of the president of the United States: https://en.wikipedia.org/wiki/Oath_of_office_of_the_presiden...
Impeachment, removal from office, being barred from ever holding office again. Those are separate but need to happen in order.
That’s the limit of it, by design I think. The lack of criminality or even civil offense means the job is predicated on trust to achieve political goals. Once trust is lost, the individual must be removed from office or immense damage will ensue.
I've never heard of double jeopardy for impeachment; being impeached does not preclude criminal prosecution for the same offense.
The FBI's assessment of presidential immunity should perhaps be reviewed in light of the court's recent ruling on it, and the fact that they are an executive-branch department of government. They work for the executive (as evidenced by the firing of James Comey), so we can't trust their assessment of his immunity.
AI tools are spotting errors in research papers: inside a growing movement
AI tools are introducing errors in research papers.
Wonder which is more common overall? Can AI spot more errors than it creates, or is it in equilibrium, a net zero?
Other than these quips, I am actually a fan of this movement. Anything that helps in the scientific process of peer review, as long as it does not actively annoy and delay the authors is a welcome addition to the process. Papers will be written, many of them incorrectly, with statistical or procedural errors. To have a checker that can find these quickly, ideally pre-publishing, is a great thing.
Also pro error finding with LLMs.
But concerned about tool dependency; if you can't do it without the AI, you can't support the code.
"LLMs cannot find reasoning errors, but can correct them" (2023) https://news.ycombinator.com/item?id=38353285
"New GitHub Copilot research finds 'downward pressure on code quality'" (2024) https://news.ycombinator.com/item?id=39168105
"AI generated code compounds technical debt" (2025) https://news.ycombinator.com/item?id=43185735 :
> “I don't think I have ever seen so much technical debt being created in such a short period of time"
Even if trained on only formally verified code, we should not expect LLMs to produce code that passes formal verification.
"Ask HN: Are there any objective measurements for AI model coding performance?" (2025) https://news.ycombinator.com/item?id=43206779
https://news.ycombinator.com/item?id=43061977#43069287 re: memory vulns, SAST DAST, Formal Verification, awesome-safety-critical
Covid-19 speeds up artery plaque growth, raising heart disease risk
From https://news.ycombinator.com/item?id=40681226 :
> "Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming" (2016) https://www.science.org/doi/10.1126/scitranslmed.aad6100
> "Powdered Booze Could Fix Your Clogged Arteries" (2016) https://www.popsci.com/compound-in-powdered-alcohol-can-also...
FWIU, beta-cyclodextrin is already FDA approved, and injection of betacyclodextrin reversed arterio/atherosclerosis; possibly because our arteries are caked with sugar alcohol and beta-cyclodextrin absorbs alcohol.
Asteroid fragments upend theory of how life on Earth bloomed
> Not only does Bennu contain all 5 of the nucleobases that form DNA and RNA on Earth and 14 of the 20 amino acids found in known proteins, the asteroid’s amino acids hold a surprise.
> On Earth, amino acids in living organisms predominantly have a ‘left-handed’ chemical structure. Bennu, however, contains nearly equal amounts of these structures and their ‘right-handed’, mirror-image forms, calling into question scientists’ hypothesis that asteroids similar to this one might have seeded life on Earth.
Hm, why would chirality need to be a consequence of the panspermia hypothesis? I thought the mission defined "necessary ingredients for life" as any bio markers that might have seeded a primordial soup on Earth.
I don't know a ton about how chirality works. Couldn't it just be that half (or some number) of the asteroids contain left-handed molecules and half right-handed? We only have a sample of one. Or is there something fundamental about left-handed molecules that gives us reason to believe that if we see right-handed ones once, we would rarely see left-handed ones in similar disconnected systems?
Chirality means that there is a mirror image of a molecule that cannot be twisted into the original shape, despite being structurally identical. Due to the particular ways molecules tickle each other in living organisms to do interesting things, that means that the mirror image (the enantiomer) of a molecule does something different.
In chemical synthesis, most (but not all) processes tend to preserve the chirality of molecules: replacing a bunch of atoms in a molecule with another set will tend not to flip the molecule to its mirror image. If you start from an achiral molecule (one whose mirror image can be rotated back to the original), almost all processes end up with a 50-50 mix of the two enantiomers of the product (a racemate).
In biochemistry, you can derive all of the amino acids and sugars from a single chiral molecule: glyceraldehyde. It turns out that nearly all amino acids end up in the form derived from L-glyceraldehyde and nearly all sugars come from D-glyceraldehyde. The question of why this is the case is the question of homochirality.
There's as yet no full answer to the question of homochirality. We do know that a slight excess of one enantiomer tends to amplify into a situation where only that enantiomer occurs. But we don't know if the breakdown into L-amino acids and D-sugars (as opposed to D-amino acids and L-sugars) happened by pure chance or if there is some specific reason that L-amino acids/D-sugars is preferred.
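That amplification dynamic can be made concrete with a toy simulation (my own sketch, loosely in the spirit of Frank's 1953 autocatalysis model; the rates and equations are illustrative, not the real chemistry): two enantiomers that each self-replicate but inhibit one another, so a small initial excess runs away to homochirality.

```python
# Toy model of chiral amplification (illustrative rates, not real chemistry):
# L and D each self-replicate, saturate logistically, and inhibit the other
# twice as strongly as themselves. The symmetric state is unstable, so a
# tiny initial excess runs away to homochirality.

def simulate(L, D, dt=0.01, steps=5000):
    for _ in range(steps):
        dL = L * (1 - L - 2 * D)  # growth - self-limitation - cross-inhibition
        dD = D * (1 - D - 2 * L)
        L += dL * dt
        D += dD * dt
    return L, D

L0, D0 = 0.51, 0.49                 # 2% enantiomeric excess
L, D = simulate(L0, D0)
ee = (L - D) / (L + D)              # enantiomeric excess after amplification
print(f"initial ee = {(L0 - D0) / (L0 + D0):.2f}, final ee = {ee:.2f}")
```

The symmetric state is an unstable fixed point of these equations, so the 2% starting excess grows until one enantiomer dominates.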
"I Applied Wavelet Transforms to AI and Found Hidden Structure" https://news.ycombinator.com/item?id=42956262 re: CODES, chirality, and chiral molecules whose chirality results in locomotion
Do any of these affect the fields that would have selected for molecules on Earth? The Sun's rotation, Earth's rotation, the direction of revolution in our by now almost coplanar solar system, Galactic rotation
Launch HN: Enhanced Radar (YC W25) – A safety net for air traffic control
Hey HN, we’re Eric and Kristian of Enhanced Radar. We’re working on making air travel safer by augmenting control services in our stressed airspace system.
Recent weeks have put aviation safety on everyone’s mind, but we’ve been thinking about this problem for years. Both of us are pilots — we have 2,500 hours of flight time between us. Eric flew professionally and holds a Gulfstream 280 type rating and both FAA and EASA certificates. Kristian flies recreationally, and before this worked on edge computer vision for satellites.
We know from our flying experience that air traffic management is imperfect (every pilot can tell stories of that one time…), so this felt like an obvious problem to work on.
Most accidents are the result of an overdetermined “accident chain” (https://code7700.com/accident_investigation.htm). The popular analogy here is the Swiss cheese model, where holes in every slice line up perfectly to cause an accident. Often, at least one link in that chain is human error.
We’ll avoid dissecting this year’s tragedies and take a close call from last April at DCA as an example:
The tower cleared JetBlue 1554 to depart on Runway 04, but simultaneously a ground controller on a different frequency cleared a Southwest jet to cross that same runway, putting them on a collision course. Controllers noticed the conflict unfolding and jumped in to yell at both aircraft to stop, avoiding a collision with about 8 seconds to spare (https://www.youtube.com/watch?v=yooJmu30DxY).
Importantly, the error that caused this incident occurred approximately 23 seconds before the conflict became obvious. In this scenario, a good solution would be a system that understands when an aircraft has been cleared to depart from a runway, and then makes sure no aircraft are cleared to cross (or are in fact crossing) that runway until the departing aircraft is wheels-up. And so on.
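The runway rule described above can be sketched as a small state machine (all class, event, and callsign names here are hypothetical, not Enhanced Radar's actual system): a takeoff clearance "locks" the runway, and a crossing clearance issued while the lock is held raises an alert.

```python
# Minimal sketch of the rule above: a takeoff clearance "locks" the runway;
# a crossing clearance issued while the lock is held raises an alert.
# All names, events, and callsigns here are hypothetical.

class RunwayMonitor:
    def __init__(self):
        self.departing = {}  # runway -> callsign currently cleared for takeoff

    def cleared_for_takeoff(self, runway, callsign):
        self.departing[runway] = callsign

    def wheels_up(self, runway):
        self.departing.pop(runway, None)  # departure complete, unlock runway

    def cleared_to_cross(self, runway, callsign):
        """Return an alert string if the crossing conflicts, else None."""
        if runway in self.departing:
            return (f"CONFLICT: {callsign} cleared to cross runway {runway} "
                    f"while {self.departing[runway]} is departing")
        return None

m = RunwayMonitor()
m.cleared_for_takeoff("04", "JBU1554")
alert = m.cleared_to_cross("04", "SWA2937")  # flagged at clearance time
print(alert)
m.wheels_up("04")
print(m.cleared_to_cross("04", "SWA2937"))   # no conflict once wheels-up
```

The point of the sketch is timing: the alert fires when the conflicting clearance is issued, roughly 23 seconds before the conflict would become visually obvious.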
To do this, we’ve developed Yeager, an ensemble of models including state of the art speech-to-text that can understand ATC audio. It’s trained on a large amount of our own labeled ATC audio collected from our VHF receivers located at airports around the US. We improve performance by injecting context such as airport layout details, nearby/relevant navaids, and information on all relevant aircraft captured via ADS-B.
Our product piggy-backs on the raw signal in the air (VHF radio from towers to pilots) by having our own antennas, radios, and software installed at the airport. This system is completely parallel to existing infrastructure, requires zero permission, and zero integration. It’s an extra safety net over existing systems (no replacement required). All the data we need is open-source and unencrypted.
Building models for processing ATC speech is our first step toward building a safety net that detects human error (by both pilots and ATC). The latest system transcribes the VHF control audio at about 1.1% WER (Word Error Rate), down from a previous record of about 9%. We're using these transcripts with NLP and ADS-B (the system that tracks aircraft positions in real time) for readback detection (ensuring pilots correctly repeat ATC instructions) and command compliance.
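For readers unfamiliar with the metric: WER is the word-level edit distance (substitutions, insertions, deletions) between hypothesis and reference transcripts, divided by the reference length. A minimal sketch:

```python
# Word Error Rate: word-level Levenshtein distance divided by the number
# of words in the reference transcript.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five -> 20% WER:
print(wer("cleared to land runway four", "cleared to land runway for"))  # 0.2
```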
There are different views about the future of ATC. Our product is naturally based on our own convictions and experience in the field. For example, it’s sometimes said that voice comms are going away — we think they aren’t (https://www.ericbutton.co/p/speech). People also point out that airplanes are going to fly themselves — in fact they already do. But passenger airlines, for example, will keep a pilot onboard (or on the ground) with ultimate control, for a long time from now; the economics and politics and mind-boggling safety and legal standards for aviation make this inevitable. Also, while next-gen ATC systems like ASDE-X are already in place, they don’t eliminate the problem. The April 2024 scenario mentioned above occurred at DCA, an ASDE-X-equipped airport.
America has more than 5,000 public-use airports, but only 540 of these have control towers (due to cost). As a result, there are over 100 commercial airline routes that fly into uncontrolled airports, and 4.4M landings at these fields. Air traffic control from first principles looks significantly more automated, more remote-controlled, and much cheaper — and as a result, much more widespread.
We’ve known each other for 3 years, and decided independently that we needed to work on air traffic. Having started on this, we feel like it’s our mission for the next decade or two.
If you’re a pilot or an engineer who’s thought about this stuff, we’d love to get your input. We look forward to hearing everyone’s thoughts, questions, ideas!
Can any aircraft navigation system plot drone Remote ID beacons on a map?
How sensitive of a sensor array is necessary to trilaterate Remote ID signals and birds for aircraft collision avoidance?
A Multispectral sensor array (standard) would probably be most robust.
From https://news.ycombinator.com/item?id=40276191 :
> Are there autopilot systems that do any sort of drone, bird, or other aerial object avoidance?
A lot of drones these days will have ADS-B. The ones that don't probably have geo-fencing to keep them away from airports. There's also all kinds of drone detection systems based on RF emittance.
The bird problem is a whole other issue. Mostly handled by PIREP today if birds are hanging out around an approach/departure path.
Computer vision here is definitely going to be useful long term.
FWIU geo-fencing was recently removed from one brand of drones.
Thermal: motors, chips, heatsinks, and batteries are warm but the air is colder around propellers; RF: motor RF, circuit RF, battery RF, control channel RF, video channel RF, RF from federally required Remote ID or ADS-B beacons, gravitational waves
Aircraft have less time to recover from e.g. engine and windshield failure at takeoff and landing at airports; so all drones at airports must be authorized by ATC Air Traffic Control: it is criminally illegal to fly a drone at the airport without authorization because it endangers others.
Tagging a bird on the 'dashcam'+radar+sensors feed could create a PIREP:
PIREP: Pilot's Report: https://en.wikipedia.org/wiki/Pilot_report
Looks like "birds" could be coded as /SK sky cover, /WX weather and visibility, or /RM remarks with the existing system described on Wikipedia.
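For illustration, a bird sighting could ride in the remarks field of an otherwise ordinary PIREP (all values below are invented; field codes as described in the Wikipedia article):

  UA /OV KDCA /TM 1755 /FL010 /TP A320 /RM FLOCK OF BIRDS ON FINAL RWY 01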
Prometheus (originally developed by SoundCloud) does pull-style metrics: each monitored server hosts, over HTTP(S), a document in the Prometheus text exposition format that the centralized monitoring service scrapes on its own schedule. This avoids swamping (or DoS'ing) the centralized monitoring service, which in a push-style monitoring system must scale to the number of incoming reports.
All metrics for the service are included in the one (1) document, which prevents requests for monitoring data from exhausting the resources of the monitored server. It is up to the implementation whether to fill with nulls when sensor data is unavailable, or, for example, to fill forward with the previous value for that metric.
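As a sketch of what that single document looks like, here is a minimal generator for the Prometheus text exposition format (the metric names and values are invented for illustration):

```python
# Render counters/gauges in the Prometheus text exposition format:
# optional "# HELP"/"# TYPE" comment lines, then "name{labels} value".

def render_metrics(metrics):
    lines = []
    for name, (help_text, mtype, samples) in metrics.items():
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        for labels, value in samples:
            label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
            lines.append(f"{name}{{{label_str}}} {value}" if label_str
                         else f"{name} {value}")
    return "\n".join(lines) + "\n"

doc = render_metrics({
    "http_requests_total": (
        "Total HTTP requests served.", "counter",
        [({"method": "get", "code": "200"}, 1027),
         ({"method": "post", "code": "200"}, 3)]),
    "process_open_fds": ("Open file descriptors.", "gauge", [({}, 42)]),
})
print(doc)
```

A real exporter would serve this string at a `/metrics` endpoint for the scraper to pull.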
Solutions for birds around runways and in flight paths and around wind turbines:
- Lights
- Sounds: human audible, ultrasonic
- Thuds: birds take flight when the ground shakes
- Eyes: Paint large eyes on signs by the runways
> Sounds and Thuds [that scare birds away]
In "Glass Antenna Turns windows into 5G Base Stations" https://news.ycombinator.com/item?id=41592848 or a post linked thereunder, I mentioned ancient stone lingams on stone pedestals which apparently scare birds away from temples when they're turned.
/? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
Are some ancient stone lingams also piezoelectric voice transducer transmitters, given water over copper or gold between the lingam and pedestal and given the original shape of the stones? Also, stories of crystals mounted on pyramids and towers.
Could rotating large stones against stone scare birds away from runways?
Remote ID: https://en.wikipedia.org/wiki/Remote_ID
Airborne collision avoidance system: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...
"Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665458
Are there already "bird / not a bird" datasets?
Procedures for creating "bird on Multispectral plane radar and video" dataset(s):
Tag birds on the dashcam video with timecoded sensor data and a segmentation and annotation tool.
Pinch to zoom, auto-edge detect, classification probability, sensor status
voxel51/fiftyone does segmentation and annotation with video and possibly Multispectral data: https://github.com/voxel51/fiftyone
Oh, weather radar would help pilots too:
From https://news.ycombinator.com/item?id=43260690 :
> "The National Weather Service operates 160 weather radars across the U.S. and its territories. Radar detects the size and motion of particles in rain, snow, hail and dust, which helps meteorologists track where precipitation is falling. Radar can even indicate the presence of a tornado [...]"
From today, just now; fish my wish!
"Integrated sensing and communication based on space-time-coding metasurfaces" (2025-03) https://news.ycombinator.com/item?id=43261825
NASA uses GPS on the moon for the first time
> the Lunar GNSS Receiver Experiment (LuGRE) [is] one of the 10 projects packed aboard Blue Ghost. [...]
> However, LuGRE’s achievements didn’t only begin after touchdown on the moon. On January 21, the instrument broke NASA’s record for highest altitude GNSS signal acquisition at 209,900 miles from Earth while traveling to the moon. That record continued to rise during Blue Ghost’s journey over the ensuing days, peaking at 243,000 miles from Earth after reaching lunar orbit on February 20.
New Benchmark in Quantum Computational Advantage with 105-Qubit Processor
ScholarlyArticle: "Establishing a New Benchmark in Quantum Computational Advantage with 105-qubit Zuchongzhi 3.0 Processor" (2025) https://arxiv.org/abs/2412.11924
NewsArticle: "Superconducting quantum processor prototype operates 10^15 times faster than fastest supercomputer" https://phys.org/news/2025-03-superconducting-quantum-proces... :
> The quantum processor achieves a coherence time of 72 μs, a parallel single-qubit gate fidelity of 99.90%, a parallel two-qubit gate fidelity of 99.62%, and a parallel readout fidelity of 99.13%. The extended coherence time provides the necessary duration for performing more complex operations and computations. [...]
> To evaluate its capabilities, the team conducted an 83-qubit, 32-layer random circuit sampling task on the system. Compared to the current optimal classical algorithm, the computational speed surpasses that of the world's most powerful supercomputer by 15 orders of magnitude. Additionally, it outperforms the latest results published by Google in October of last year by 6 orders of magnitude, establishing the strongest quantum computational advantage in the superconducting system to date.
Integrated sensing and communication based on space-time-coding metasurfaces
ScholarlyArticle: "Integrated sensing and communication based on space-time-coding metasurfaces" (2025) https://www.nature.com/articles/s41467-025-57137-6
NewsArticle: "Space-time-coding metasurface could transform wireless networks with dual-functionality for 6G era" https://techxplore.com/news/2025-03-space-coding-metasurface...
How the U.K. broke its own economy
This video explains who paid for Brexit and Trump: "Canadian company tied to Brexit and Trump backers" (2019) https://youtube.com/watch?v=alNjJVpO8L4&
Cambridge Analytica > United Kingdom: https://en.wikipedia.org/wiki/Cambridge_Analytica#Channel_4_...
"New Evidence Emerges of Steve Bannon and Cambridge Analytica’s Role in Brexit" (2018) https://www.newyorker.com/news/news-desk/new-evidence-emerge...
"‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower" (2018) https://www.theguardian.com/news/2018/mar/17/data-war-whistl...
Facebook–Cambridge Analytica data scandal: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana... ($4b FTC fine)
Furthermore,
"3.5M Voters Were Purged During 2024 Presidential Election [video]" (2025) https://news.ycombinator.com/item?id=43047349
They only "won" by like 1.5m votes.
Who's illegitimate? https://www.google.com/search?q=Trump+birtherism
Live Updates: China and Canada Retaliate Against New Trump Tariffs
I'm learning that when things go to shit smart money looks for opportunity.
Anyone want to start up a company selling pre-fab shipping-container houses with me?
More broadly, the kind of people who make money under tariff schemes already have investment managers.
I haven’t heard a Keynesian say these tariffs will achieve their stated purpose.
The US tariffs hurt US consumers and Canadian companies. The counter-tariffs do the reverse. However, the US tariffs seem to be across-the-board, while the Canadian ones seem tailored to "encourage" selection of a non-US alternative (and are geo-targeted, as it were). Is the US ready to experience some pain? Those outside the US have been prepping for this for a while.
The counter tariffs are usually explicitly chosen to make it not hurt their own citizens while hurting red states, because alternative sources of the tariffed goods exists domestically or from other parts of the world. Americans need aluminum and steel and lumber and power. Canada does not need Kentucky whiskey and Harleys.
Kinda, yeah, and I get that everyone will have their pet good that they feel shouldn't be tariffed, but including fruits/vegetables/legumes very much hurts the consumer.
China seemed to focus on agricultural goods which it has been pushing for local industries for decades anyways. I think this was an olive branch in that it didn't really try to turn the screws?
Because of live updates, this snapshot is necessarily out of date: https://archive.ph/qK2mx
IKEA registered a Matter-over-Thread temperature sensor with the FCC
Matter (standard) https://en.wikipedia.org/wiki/Matter_(standard)
Connectivity Standards Alliance > Certified products: https://csa-iot.org/csa-iot_products/?p_keywords=Thermometer...
Nanoscale spin rectifiers for harvesting ambient radiofrequency energy
"Nanoscale spin rectifiers for harvesting ambient radiofrequency energy" (2024) https://www.nature.com/articles/s41928-024-01212-1
From "NUS researchers develop new battery-free technology to power electronic devices using ambient radiofrequency signals" (2024) https://news.nus.edu.sg/nus-researchers-develop-new-battery-... :
> To address these challenges, a team of NUS researchers, working in collaboration with scientists from Tohoku University (TU) in Japan and University of Messina (UNIME) in Italy, has developed a compact and sensitive rectifier technology that uses nanoscale spin-rectifier (SR) to convert ambient wireless radio frequency signals at power less than -20 dBm to a DC voltage.
> The team optimised SR devices and designed two configurations: 1) a single SR-based rectenna operational between -62 dBm and -20 dBm, and 2) an array of 10 SRs in series achieving 7.8% efficiency and zero-bias sensitivity of approximately 34,500 mV/mW. Integrating the SR-array into an energy harvesting module, they successfully powered a commercial temperature sensor at -27 dBm.
Passive Wi-Fi or "backscatter redirection Wi-Fi": https://en.wikipedia.org/wiki/Passive_Wi-Fi :
> The system used tens of microwatts of power, [2] 10^−4 of the energy of conventional Wi-Fi devices, and one thousandth the energy of Bluetooth LE and Zigbee communications standards. [1]
Ask HN: Where are the good Markdown to PDF tools (that meet these requirements)?
I'm trying to convert a very large Markdown file (a couple hundred pages) to PDF.
It contains lots of code in code blocks and has a table of contents at the start with internal links to later pages.
I've tried lots of different Markdown-to-PDF converters, like md2pdf and Pandoc, even trying to convert it through LaTeX first; however, none of them produce working internal PDF links, have effective syntax highlighting for HTML, CSS, JavaScript and Python, and wrap code to fit it on the page.
I have a very long regular expression (email validation, of course) that doesn't fit on one line, and no solution I have found properly breaks such lines on page overflow.
What tools does everyone recommend?
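Not a definitive answer, but one pandoc combination worth trying (whether it meets all three requirements for this particular document is untested): the `--listings` route hands code blocks to the LaTeX listings package, which can both highlight and wrap long lines, and pandoc's auto-generated header identifiers usually make a `--toc` table of contents and `[text](#header-id)` links come out as working internal PDF links.

```shell
# Sketch; requires pandoc plus a LaTeX distribution providing xelatex.
cat > listings-setup.tex <<'EOF'
\lstset{breaklines=true, postbreak=\mbox{$\hookrightarrow$\space},
        basicstyle=\ttfamily\small}
EOF

pandoc book.md -o book.pdf \
    --pdf-engine=xelatex \
    --toc \
    --listings \
    -H listings-setup.tex \
    -V geometry:margin=2.5cm
```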
Understanding Smallpond and 3FS
smallpond: https://github.com/deepseek-ai/smallpond :
> A lightweight data processing framework built on DuckDB and 3FS.
[deleted]
SEC Declares Memecoins Are Not Subject to Oversight
By the SEC, or in US law, because of DOGE?
Can QBT Qualified Blind Trusts own memecoins, or is that still a conflict of interest? https://news.ycombinator.com/item?id=43201808
Trump sets the direction of SEC enforcement, and he's made it clear that he thinks he (and everyone else) should be able to rob normal people via cryptocurrency.
Rob Trump or a crony and I bet the rules of the game change fast.
These people do not have principles.
Collectible dolls aren't SEC regulated as investments. But they're still subject to laws protecting property rights.
Collectibles are also subject to financial reporting requirements that apply according to the USD value at time of purchase and sale.
As I said before regarding "Elephant in the room: Quantum computers will destroy Bitcoin" https://news.ycombinator.com/item?id=43188345#43188777 :
> The market does not appear to cost infosec value, risk, or technical debt into cryptoasset prices.
> PQ or non-PQ does not predict asset price in 2025-02.
But what about DOGE?
Jeff Foxworthy, a comedian, once said:
> Sophisticated people invest their money in stock portfolios.
> Rednecks invest their money in commemorative plates.
Are NFTs or memecoins more similar to (NASCAR) commemorative plates?
That is the perfect analogy both for what NFTs are and who buys them. I’m stealing that.
So do we blame Biden or Trump for delisting XRP, or is SEC respectably independent?
What about CFTC and FTC and the CAT Consolidated Audit Trail; are collectibles over 10K exempt from KYC and AML there too?
(By comparison, banks put days-long holds on large checks.)
There's no such thing as an "independent" executive-branch agency. Congress can't by law create new branches of government.
Is that really true: isn't the Federal Reserve (intentionally designed to be) very independent from both Congress and the White House?
The Federal Reserve is a bit odd since it's a mix of private corporations & public governance. The Federal Reserve Board of Governors (BOG) is part of the government, the Federal Reserve Banks are private corporations whose officers' salaries are approved by the BOG but whose officers are elected by the Member Banks (banks in the US are required to be members). The Federal Open Market Committee (FOMC) is a mix of the BOG & some of the presidents of the various Federal Reserve Banks.
So the BOG isn't independent of the executive branch (it legally can't be), but the FOMC is partly independent since it's a mix of executive branch employees & private bank employees.
That's generally correct, but the issue is more about what kinds of powers are being exercised. Congress can create government-owned corporations, like Amtrak. Those corporations can function independently, insofar as they are not exercising executive powers.
The core function of the Fed isn't an executive power. It's not enforcing the law, or interacting with foreign countries. It's a bank that lends to other banks, and influences the market through that economic function. That's not an executive power and doesn't need to be subject to executive control.
The Fed also performs some executive functions (promulgating and enforcing various regulations). I'd argue those must be under executive control. But that doesn't address the Fed's core interest-rate-setting function.
The recent tariff spat by the tantrum-thrower is supposedly justified by the drug overdose rate.
All lives matter.
List of causes of death by rate: https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rat... :
> Substance abuse: 0.58 %
What about the other 99.42 % of the causes of death, in terms of federal priorities?
What about NIH and NSF funding for medical sciences research?
/? tariff authorization: https://tinyurl.com/certainplansforusall
"Are Drinking Straws Dangerous? (2017)" https://news.ycombinator.com/item?id=43041625
/? plastic straws executive order: https://www.google.com/search?q=plastic+straws+executive+ord...
The recently proposed budget would increase the deficit by 16 trillion dollars on 40 trillion?
Is an Amendment to the Constitution necessary to limit a right according to disability or to create an independent agency which is independent from the Executive (Justice), Legislative, and Judicial branches?
Define "Independent Agency"?
GSA General Services Administration: https://en.wikipedia.org/wiki/General_Services_Administratio... :
> The General Services Administration (GSA) is an independent agency of the United States government established in 1949 to help manage and support the basic functioning of federal agencies.
Independent agencies of the United States federal government: https://en.wikipedia.org/wiki/Independent_agencies_of_the_Un... :
> In the United States federal government, independent agencies are agencies that exist outside the federal executive departments (those headed by a Cabinet secretary) and the Executive Office of the President. [1] In a narrower sense, the term refers only to those independent agencies that, while considered part of the executive branch, have regulatory or rulemaking authority and are insulated from presidential control, usually because the president's power to dismiss the agency head or a member is limited.
> Established through separate statutes passed by Congress, each respective statutory grant of authority defines the goals the agency must work towards, as well as what substantive areas, if any, over which it may have the power of rulemaking. These agency rules (or regulations), when in force, have the power of federal law. [2]
However, like Acts of Congress and Executive Orders, such rules are not Constitutional Amendments.
> Examples of independent agencies: These agencies are not represented in the cabinet and are not part of the Executive Office of the president:
> [ Amtrak, CIA, FCC, FDIC, FEC, Federal Reserve, FERC, FTC, CFTC, SSA, TVA, NASA, NARA, OPM, ]
> Define "Independent Agency"?
I said independent executive-branch agency. The very first sentence of Article II says: "The executive Power shall be vested in a President of the United States of America." Congress can't create an agency that exercises "the executive Power" that's independent of the President. In the same way that Congress can't create an unelected mini-Congress that enacts laws binding on citizens, and can't create courts outside the judiciary branch that can convict people for federal crimes.
> [ Amtrak, CIA, FCC, FDIC, FEC, Federal Reserve, FERC, FTC, CFTC, SSA, TVA, NASA, NARA, OPM, ]
These entities all differ in whether they're exercising "executive power" or not. Amtrak doesn't meaningfully exercise executive power. Congress can provide for Amtrak to be independent of the President's control. Or to use another example, Congress could probably create a bank that provides student loans that's independent of Presidential control.
But the SEC is a quintessential executive-branch agency. It enacts rules that interpret the securities laws and can prosecute people for violations of securities laws.
> In a narrower sense, the term refers only to those independent agencies that, while considered part of the executive branch, have regulatory or rulemaking authority and are insulated from presidential control, usually because the president's power to dismiss the agency head or a member is limited.
Appointees to independent agencies must be confirmed by the Senate; otherwise they are not independent of the Executive.
Can the President terminate Congressionally-approved nominations without regard for their service? They can.
Is the President totally immune? They are not.
When can't the executive pardon themselves?
PEP 486 – Make the Python Launcher aware of virtual environments (2015)
It seems to me the py launcher could have done the same things as uv, such as downloading and setting up Python and managing virtual environments.
It may be that there were already too many distros of Python by the time that venv (and virtualenv and virtualenvwrapper) were written.
"PEP 3147 – PYC Repository Directories" https://peps.python.org/pep-3147/ :
> Linux distributions such as Ubuntu [4] and Debian [5] provide more than one Python version at the same time to their users. For example, Ubuntu 9.10 Karmic Koala users can install Python 2.5, 2.6, and 3.1, with Python 2.6 being the default. [...]
> Because these distributions cannot share pyc files, elaborate mechanisms have been developed to put the resulting pyc files in non-shared locations while the source code is still shared. Examples include the symlink-based Debian regimes python-support [8] and python-central [9]. These approaches make for much more complicated, fragile, inscrutable, and fragmented policies for delivering Python applications to a wide range of users. Arguably more users get Python from their operating system vendor than from upstream tarballs. Thus, solving this pyc sharing problem for CPython is a high priority for such vendors.
> This PEP proposes a solution to this problem.
> Proposal: Python’s import machinery is extended to write and search for byte code cache files in a single directory inside every Python package directory. This directory will be called __pycache__.
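The effect of PEP 3147 is visible from the stdlib: `importlib.util.cache_from_source` maps a source path to its interpreter-tagged path under `__pycache__`, which is how several Python versions share one source tree without clobbering each other's bytecode.

```python
# PEP 3147 in practice: bytecode cache paths embed an interpreter "magic
# tag" under __pycache__, so multiple Python versions can share one
# source tree without clobbering each other's .pyc files.
import importlib.util
import sys

pyc = importlib.util.cache_from_source("pkg/module.py")
print(pyc)  # e.g. pkg/__pycache__/module.cpython-312.pyc (tag varies)

assert "__pycache__" in pyc
assert sys.implementation.cache_tag in pyc  # identifies this interpreter
```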
Should the package-management tool also install multiple versions of the interpreter? conda, mamba, pixi, and uv do. Neither tox, nox, nor pytest cares where the Python install came from.
And then of course cibuildwheel builds binary wheels for Windows/macOS/Linux, including manylinux wheels for glibc and/or musl libc. repairwheel, auditwheel, delocate, and delvewheel bundle shared-library dependencies (.so and DLL) into the wheel, which is just a .zip file with a .whl extension and a declarative manifest, so installers don't need to run arbitrary Python code at install time.
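That "a wheel is just a zip" point is easy to demonstrate with the stdlib; the sketch below writes a minimal wheel-shaped archive (illustrative only: RECORD is left empty, with no file hashes, so a real installer would reject it):

```python
# A wheel is a .zip with a .whl extension plus declarative metadata in a
# <name>-<version>.dist-info/ directory. Illustration only: RECORD is
# empty here (no file hashes), so pip would reject this as incomplete.
import zipfile

path = "demo_pkg-0.1.0-py3-none-any.whl"
with zipfile.ZipFile(path, "w") as whl:
    whl.writestr("demo_pkg/__init__.py", "__version__ = '0.1.0'\n")
    whl.writestr("demo_pkg-0.1.0.dist-info/METADATA",
                 "Metadata-Version: 2.1\nName: demo_pkg\nVersion: 0.1.0\n")
    whl.writestr("demo_pkg-0.1.0.dist-info/WHEEL",
                 "Wheel-Version: 1.0\nRoot-Is-Purelib: true\nTag: py3-none-any\n")
    whl.writestr("demo_pkg-0.1.0.dist-info/RECORD", "")

with zipfile.ZipFile(path) as whl:
    names = whl.namelist()
print(names)
```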
https://news.ycombinator.com/item?id=42347468
repairwheel: https://github.com/jvolkman/repairwheel :
> It includes pure-python replacements for external tools like patchelf, otool, install_name_tool, and codesign, so no non-python dependencies are required.
pip used to support virtualenvs.
pip 0.2 (2008) https://pypi.org/project/pip/0.2/ :
> pip is complementary with virtualenv, and it is encouraged that you use virtualenv to isolate your installation.
https://pypi.org/project/pip/0.2/#using-pip-with-virtualenv :
pip install -E venvpath/ pkg1 pkg2
When was the -E <virtualenv> flag removed from pip and why? pip install --help | grep "\-E"
"PEP 453 – Explicit bootstrapping of pip in Python installations" https://peps.python.org/pep-0453/#changes-to-virtual-environ... :
> Python 3.3 included a standard library approach to virtual Python environments through the venv module. Since its release it has become clear that very few users have been willing to use this feature directly, in part due to the lack of an installer present by default inside of the virtual environment. They have instead opted to continue using the virtualenv package which does include pip installed by default.
"why venv install old pip?" re: `python -m ensurepip && python -m pip install -U pip` https://github.com/python/cpython/issues/74813
> When was the -E <virtualenv> flag removed from pip and why?
Though `pip install --python=... pkg` won't work ( https://github.com/pypa/pip/pull/12068 ),
Now, there's
pip --python=$VIRTUAL_ENV/bin/python install pkg
GSA Eliminates 18F
Amazing how the idea of being able to file tax returns online, without paying for a commercial service to do it, is considered far-left extremism in the US.
You’d think that simplifying tax returns and reducing costs in the taxation system would be something the right could get behind?
No, the right wants you to keep paying TurboTax.
"Musk ally is moving to close office behind free tax filing program at IRS" (2025) https://news.ycombinator.com/item?id=43222216
18F is not the group behind Direct File, and it's very annoying that this narrative keeps getting spread. DF was created by a team of people from USDS, 18F and the IRS. It's housed within the IRS.
Inheriting is becoming nearly as important as working
People really should look at the work of Gary Stevenson here: https://youtu.be/TflnQb9E6lw
The truth is economic growth hasn’t been occurring in real terms for most people for a long time and the rich have been transferring money from the poor to themselves at a dramatic rate.
I’m starting to think the entire system is corrupt and we are headed for a destroyed Europe and a civil war in the US. Maybe I’m very pessimistic but this moment in history feels like the end of the American empire, what comes after this is extremely uncertain but people only seem to demand a fair piece of the wealth after a world war.
From https://news.ycombinator.com/item?id=43140675 :
> Gini Index: https://en.wikipedia.org/wiki/Gini_coefficient
Find 1980 on this chart of wealth inequality in the US:
> GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
Are there additional measures of wealth inequality?
Income distribution: https://en.wikipedia.org/wiki/Income_distribution
World Inequality Database: https://en.wikipedia.org/wiki/World_Inequality_Database
USA!: https://wid.world/country/usa/
Economic inequality: https://en.wikipedia.org/wiki/Economic_inequality :
> Economic inequality is an umbrella term for a) income inequality or distribution of income (how the total sum of money paid to people is distributed among them), b) wealth inequality or distribution of wealth (how the total sum of wealth owned by people is distributed among the owners), and c) consumption inequality (how the total sum of money spent by people is distributed among the spenders).
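The Gini coefficient linked above can be computed stdlib-only; a minimal sketch (0 means perfect equality, and (n-1)/n is the maximum for n people):

```python
def gini(values):
    """Gini coefficient via the sorted-rank formula:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, for x sorted ascending."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0  # everyone has nothing: treat as perfect equality
    cum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # 0.0  (perfect equality)
print(gini([0, 0, 0, 10]))  # 0.75 (one person holds everything; max for n=4)
```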
Musk ally is moving to close office behind free tax filing program at IRS
/? turbotax https://hn.algolia.com/?q=turbotax
"TurboTax’s 20-Year Fight to Stop Americans from Filing Taxes for Free" (2019) https://news.ycombinator.com/item?id=21281411
Hash Denial-of-Service Attack in Multiple QUIC Implementations
No!
> Out of these functions, only SipHash provides significant security guarantees in this context; crafting collisions for the first two can be done fairly efficiently.
You can also trivially craft collisions against SipHash. "Significant" is not significant, and using SipHash would also make lookups much slower, comparable to a linked-list search.
The only fix for DoS attacks against hash tables is to fix the collision resolution method, e.g. going from linear to logarithmic, or to detect such attacks (i.e. collision counting). Using a slower, better hash function is FUD, because every hash function will produce collisions in hash tables. Even SHA-256.
https://github.com/rurban/smhasher/?tab=readme-ov-file#secur...
I know I've looked it up before, but for whatever reason I can't explain how open-addressing hash maps work; how does a lookup know to check the next bucket after a collision on insert?
I know it's a dumb question, and that I've researched it before. Strange.
(I recall Python first reporting the hash-randomization disclosure vuln for its open-addressing dict, which resulted in the PYTHONHASHSEED env var, and dict later becoming insertion-ordered by default.)
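For the open-addressing question above: lookup replays the exact probe sequence that insert used, and stops either when it finds the key or when it hits a slot that has never held anything (an unbroken probe chain means "not here"). A minimal linear-probing sketch, stdlib-only (no deletion or resizing; real implementations need tombstones so removals don't break the chain):

```python
EMPTY = object()  # sentinel for a never-used slot

class LinearProbingMap:
    def __init__(self, capacity=8):
        self.slots = [EMPTY] * capacity

    def _probe(self, key):
        # Deterministic probe sequence: same order for insert and lookup
        i = hash(key) % len(self.slots)
        while True:
            yield i
            i = (i + 1) % len(self.slots)

    def put(self, key, value):
        # Walk the chain until a free slot or the existing key is found
        # (loops forever if the table is full; a real map would resize)
        for i in self._probe(key):
            if self.slots[i] is EMPTY or self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return

    def get(self, key):
        for i in self._probe(key):
            if self.slots[i] is EMPTY:
                raise KeyError(key)  # chain broken: key was never inserted
            if self.slots[i][0] == key:
                return self.slots[i][1]
```

Because `put` only ever writes to the first EMPTY slot on the key's chain, `get` can safely stop at the first EMPTY slot it sees.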
/? The only fix for DOS attacks against hashtables is to fix the collision resolution method. https://www.google.com/search?q=The+only+fix+for+DOS+attacks...
"Defending Hash Tables from Subterfuge with Depth Charge" (2024) https://dl.acm.org/doi/fullHtml/10.1145/3631461.3631550
"Defending hash tables from algorithmic complexity attacks with resource burning" (2024) https://www.sciencedirect.com/science/article/abs/pii/S03043...
Hash collision: https://en.wikipedia.org/wiki/Hash_collision
Electric Propulsion Magnets Ready for Space Tests
How long will it be before this new space propulsion capability can be scaled in order to put the ISS on the moon instead of in the ocean?
Long after it already is. The ISS is aging, and there was every intention of retiring it even before its principal sponsors started a proxy war.
There's certainly zero chance that it could be on the moon. It wasn't designed to survive on a surface. It would not be able to support itself.
If we want something in orbit around the moon, it will still be far cheaper to build a new thing designed for that purpose.
It does seem a shame that it can't be recovered and donated to museums or preserved in some way. I assume that it would be prohibitively expensive/complicated to try to do so, but it's a huge part of the history of space research, and it's a bit of a bummer to just throw it away.
I mean, if Starship works well, it could theoretically retrieve the ISS in a piecemeal fashion.
Orbital refuelling of non-satellite spacecraft at greater than IDK a few km has not been demonstrated.
Rendezvousing with and pushing or pulling a 900K lb (~408K kg) object in orbit has never been demonstrated.
In-orbit outer hull repair on a vessel with occupants has also never been demonstrated?
Are there NEO avoidance plans that do not involve fracturing the object into orbital debris on approach?
Violence alters human genes for generations, researchers discover
> The idea that trauma and violence can have repercussions into future generations should help people be more empathetic, help policymakers pay more attention to the problem of violence
This seems like a pretty charitable read on policymakers. We inflict violence all the time that has multigenerational downstream effects without a genetic component and we don’t really care about the human cost, why would adding a genetic component change anything?
Because later generations shouldn't be forced to pay the cost of violence that they didn't perpetrate or perpetuate.
To make it real when there's no compassion, loving kindness, or the golden rule:
War reparations: https://en.wikipedia.org/wiki/War_reparations
I think you may want to re-read parent. They are saying that the reason you gave ( all of which were provided before ) barely restrained human kind from doing what it /was/is/will be doing.
If compassion and war reparations are insufficient to deter unjust violence, what will change the reinforced behaviors?
You have to reach the children and rehabilitate them. This is the type of damage the children are growing up with (HBO documentary from 2004 which I highly recommend people watch, the journalist got fatally shot filming it):
https://www.youtube.com/watch?v=Isa5TRnidnk#t=30m30s
Counting on your fingers the dead. This has to be a rehabilitation effort, because it's just no way for children to talk and be.
Family therapy > Summary of theories and techniques: https://en.wikipedia.org/wiki/Family_therapy#Summary_of_theo...
Expressive therapies: https://en.wikipedia.org/wiki/Expressive_therapies
Systemic therapy: https://en.wikipedia.org/wiki/Systemic_therapy :
> Based largely on the work of anthropologists Gregory Bateson and Margaret Mead, this resulted in a shift towards what is known as "second-order cybernetics" which acknowledges the influence of the subjective observer in any study, essentially applying the principles of cybernetics to cybernetics – examining the examination.
"What are the treatment goals?"
Clean Language: https://en.wikipedia.org/wiki/Clean_language
...
In "Jack Ryan" (TV Series) Season 1 Episode 1, the children suffer from war trauma in their upbringing and that pervades their lives: https://en.wikipedia.org/wiki/Jack_Ryan_(TV_series)#Season_1...
In "Life is Beautiful" (1997) Italian children are trapped in war: https://en.wikipedia.org/wiki/Life_Is_Beautiful
Magneto from X-Men (with the helmet) > Fictional character biography > Early life: https://en.wikipedia.org/wiki/Magneto_(Marvel_Comics)#Early_...
"Chronicles of Narnia" (1939-1949) > Background and conception: https://en.wikipedia.org/wiki/The_Chronicles_of_Narnia#Backg...
War reparations gave us the Nazis, so they clearly don’t work. And compassion has given us everything we have seen thus far in history so we can conclude that too is ineffective.
Reparations certainly deterred further violent fascist statism in post WWII Germany.
Unfortunately the Berlin Wall.
WWI reparations were initially assessed by the Treaty of Versailles (1919) https://en.wikipedia.org/wiki/World_War_I_reparations
Dulles, Dawes Plan > Results: https://en.wikipedia.org/wiki/Dawes_Plan
> Dawes won the 1925 Nobel Prize, WWI reparations obligations were reduced
Then the US was lending them money and steel because it was so bad there, and then we learned they had been building tanks and bombs with our money instead of railroads and peaceful jobs.
Business collaboration with Nazi Germany > British, Swiss, US, Argentinian and Canadian banks: https://en.wikipedia.org/wiki/Business_collaboration_with_Na...
And then the free money rug was pulled out from under them, and then the ethnic group wouldn't sell their paintings to help pay the debts of the war and subsequent central economic mismanagement.
And then they invaded various continents, overextended themselves when they weren't successfully managing their own country's economy, and the Allied powers eventually found the art (and gold) and dropped the bomb developed by various ethnic groups in the desert and that was that.
Except for then WWII reparations: https://en.wikipedia.org/wiki/World_War_II_reparations
The US still occupies or inhabits Germany, which is Russia's neighbor.
Trump was $400 million in debt to Deutsche Bank AG (of Germany and Russia now) and had to underwrite said loan himself due to prior defaults. Nobody but Deutsche Bank would loan Trump (Trump Vodka, University,) money prior to 2016. Also Russian state banks like VEB, they forgot to mention.
Business projects of Donald Trump in Russia > Timeline of Trump business activities related to Russia: https://en.m.wikipedia.org/wiki/Business_projects_of_Donald_...
It looks like - despite attempted bribes of foreign heads of state with free apartment - there will not be a Trump Tower Moscow.
"Biden halts Trump-ordered US troops cuts in Germany" (2021) https://apnews.com/article/joe-biden-donald-trump-military-f...
> art (and gold)
"The Monuments Men" (2014) https://en.wikipedia.org/wiki/The_Monuments_Men
You could argue that’s the trauma expressing itself through derivative real experiences into the future.
It is the purpose of criminal and civil procedure to force parties that caused loss, violence, death and trauma to pay for their offenses.
We have a court system that is supposed to abide Due Process so that there are costs to inflicting trauma (without perpetuating a vicious cycle).
[deleted]
Surgery implants tooth material in eye as scaffolding for lens
/? eye transplant https://hn.algolia.com/?q=eye+transplant
https://scholar.google.com/scholar?q=related:ZlcYhwhYqiUJ:sc...
From "Clinical and Scientific Considerations for Whole Eye Transplantation: An Ophthalmologist's Perspective" (2025) https://tvst.arvojournals.org/article.aspx?articleid=2802568 :
> Whereas advances in gene therapy, neurotrophic factor administration, and electric field stimulation have shown promise in preclinical optic nerve crush injury models, researchers have yet to demonstrate efficacy in optic nerve transection models—a model that more closely mimics WET. Moreover, directing long-distance axon growth past the optic chiasm is still challenging and has only been shown by a handful of approaches. [5–8]
> Another consideration is that even if RGC axons could jump across the severed nerve ending, it would be impossible to guarantee maintenance of the retinal-cortical map. For example, if the left eye were shifted clockwise during nerve coaptation, RGCs in the superior-nasal quadrant of donor retinas would end up synapsing with superior-temporal neurons in the host's geniculate nucleus. This limitation also plagues RGC-specific transplantation approaches; its effect on vision restoration is unknown.
"Combined Whole Eye and Face Transplant: Microsurgical Strategy and 1-Year Clinical Course" (2024) https://pubmed.ncbi.nlm.nih.gov/39250113/ :
> Abstract: [...] Serial electroretinography confirmed retinal responses to light in the transplanted eye. Using structural and functional magnetic resonance imaging, the integrity of the transplanted visual pathways and potential occipital cortical response to light stimulation of the transplanted eye was demonstrated. At 1 year post transplant (postoperative day 366), there was no perception of light in the transplanted eye.
"Technical Feasibility of Whole-eye Vascular Composite Allotransplantation: A Systematic Review" (2023) https://journals.lww.com/prsgo/fulltext/2023/04000/Technical... :
> With nervous coaptation, 82.9% of retinas had positive electroretinogram signals after surgery, indicating functional retinal cells after transplantation. Results on optic nerve function were inconclusive. Ocular-motor functionality was rarely addressed.
How to target NGF(s) to the optic nerve?
Magnets? RF convergence?
How to resect allotransplant and allograft optic nerve tissue?
How to stimulate neuronal growth in general?
Near-infrared stimulates neuronal growth and also there's red light therapy.
Nanotransfection stimulates tissue growth by in-vivo stroma reprogramming.
How to understand the optic nerve portion of the connectome?
The Visual and Auditory cortices are observed to be hierarchical.
Near-field imaging of [optic] nerves better than standard VEP Visual Evoked Potential tests would enable optimization of [optic nerve] transection.
VEP: https://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked...
Ophthalmologic science is important because - while it's possible to fight oxidation and aging - our eyes go.
Upper-atmospheric radiation is terrible on eyes. This could be a job for space medicine, and pilots.
Accomodating IOLs that resist UV damage better than natural tissue: Ocumetics
From "Portable low-field MRI scanners could revolutionize medical imaging" (2023) https://news.ycombinator.com/item?id=34990738 :
> Is MRI-level neuroimaging possible with just NIRS Near-Infrared Spectroscopy?
From "Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35886145 :
> So, to run the same [fMRI, NIRS,] stimulus response activation observation/burn-in again weeks or months later with the same subjects is likely necessary given Representational drift.
"Reversible optical data storage below the diffraction limit (2023)" [at cryogenic temperatures] https://news.ycombinator.com/item?id=38528844 :
> [...] have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined.
Optical tweezers operating below the Abbe diffraction limit are probably of use in resecting neurovascular tissue in the optic nerve (the retina and visual cortex)?
"Real-space nanophotonic field manipulation using non-perturbative light–matter coupling" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-1-1... :
> "One can write, erase, and rewrite an infinite number of times,"*
"Retinoid restores eye-specific brain responses in mice with retinal degeneration" (2022) https://news.ycombinator.com/item?id=33129531
Fluoxetine increases plasticity in the adult visual cortex; https://news.ycombinator.com/item?id=43079501
Zebrafish can regrow eyes.
From the "What if Eye...?" virtual eyes in a petri dish simulation: https://news.ycombinator.com/item?id=43044958 :
> [ mTor in Axolotls, ]
"Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://neurosciencenews.com/vision-restoration-genetic-2318... :
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
/? Regenerative medicine for ophthalmologic applications
Medical treatments devised for war can quickly be implemented in US hospitals
> Our research, and that of others, found that too much oxygen can actually be harmful. Excess oxygen triggers oxidative stress – an overload of unstable molecules called free radicals that can damage healthy cells. That can lead to more inflammation, slower healing and even organ failure.
> In short, while oxygen is essential, more isn’t always better. [...]
> We discovered that severely injured patients often require less oxygen than previously believed. In fact, little or no supplemental oxygen is needed to safely care for 95% of these patients
Oxidative stress: https://en.wikipedia.org/wiki/Oxidative_stress
Antioxidant > Levels in food: https://en.wikipedia.org/wiki/Antioxidant#Levels_in_food
Anthocyanin antioxidants: https://www.google.com/search?q=anthocyanin+antioxidants
Deep sea divers know about oxygen toxicity;
Trimix (breathing gas) https://en.wikipedia.org/wiki/Trimix_(breathing_gas) :
> With a mixture of three gases it is possible to create mixes suitable for different depths or purposes by adjusting the proportions of each gas. Oxygen content can be optimised for the depth to limit the risk of toxicity, and the inert component balanced between nitrogen (which is cheap but narcotic) and helium (which is not narcotic and reduces work of breathing, but is more expensive and can increase heat loss).
> The mixture of helium and oxygen with a 0% nitrogen content is generally known as heliox. This is frequently used as a breathing gas in deep commercial diving operations, where it is often recycled to save the expensive helium component. Analysis of two-component gases is much simpler than three-component gases.
HFNC (High-Flow Nasal Cannula) breathing tube therapy is recommended by various medical guidelines.
HFNC and prone positioning is one treatment protocol for COVID and ARDS Acute Respiratory Distress Syndrome: you put them on their stomach and give them a breathing tube (instead of a ventilator on their backs).
Which treatment protocols and guidelines should be updated given these findings?
For which conditions is HFNC therapy advisable given these findings?
Heated humidified high-flow therapy: https://en.wikipedia.org/wiki/Heated_humidified_high-flow_th...
Netboot Windows 11 with iSCSI and iPXE
Hey Terin! Nice post!
I also netboot Windows this way! To run 20 machines in my house off the same base disk image, which we use for LAN parties. I have code and an extensive guide on GitHub:
https://github.com/kentonv/lanparty
It looks like you actually figured out something I failed at, though: installing Windows directly over iSCSI from the start. I instead installed to a local device, and then transferred the disk image to the server. I knew that building a WinPE environment with the right network drivers would probably help here, but I got frustrated trying to use the WinPE tools, which seemed to require learning a lot of obscure CLI commands (ironically, being Windows...).
You observed some slowness using Windows over the network. I did too, when I was doing it with 1G LAN, but I've found on 10G it pretty much feels the same as local.
BTW, a frustrating thing: The Windows 10->11 updater also seemingly fails to include network drivers and so you can't just upgrade over iSCSI. I'm still stuck on Windows 10 so I'm going to have to reinstall everything from scratch sometime this year. Maybe I'll follow your guide to use WinPE this time.
Hey Kenton!
I figured you had done something similar with the LAN Party House. If I hadn't figured it out I was going to ask/look for your setup.
> You observed some slowness using Windows over the network.
Mini-ITX makes it a bit difficult to upgrade to 10GbE (only one PCIe slot!), and the slowness isn't bad enough in-game to deal with upgrading it just yet.
> BTW, a frustrating thing: The Windows 10->11 updater also seemingly fails to include network drivers and so you can't just upgrade over iSCSI.
I've read (and also observed, now) that if you install directly on iSCSI Windows doesn't make the recovery partition. This evidently also breaks 10->11 upgrades.
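For reference, a hedged iPXE sketch of the iSCSI boot step (the server address and IQNs below are placeholders, not values from the article; the SAN URI syntax is `iscsi:<server>:<protocol>:<port>:<LUN>:<target-iqn>`):

```
#!ipxe
dhcp
set initiator-iqn iqn.2010-04.org.ipxe:${hostname}
# Attach the iSCSI target as a SAN disk and boot from it:
sanboot iscsi:192.0.2.10::::iqn.2000-01.example:win11
# During first install, `sanhook` instead attaches the disk without booting,
# so WinPE / Windows Setup can see it as an install target.
```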
God bless you for this, sir. I've been wanting to get Windows iSCSI boot working, but there's always one more thing. Did you get anything else fun working with iPXE? All the examples online seem so outdated.
If you get (i)pxe running, you can chain to https://netboot.xyz/ which lets you boot lots of open source stuff.
It's a bit of a mixed bag, because pxe environments have a way of not always being useful. On bios boot, there's tools from isolinux to memory load disk images and hook the bios calls... but if your OS of choice doesn't use bios calls for storage, it needs a driver that can find the disk image in memory.
For uefi boot, there's not a good way to do this, supposedly some uefi environments can load disk images from the network, but afaik, it's not something you can do from ipxe. Instead, for UEFI, the netboot.xyz folks have some other approaches; typically fetching the kernel and initrd separately or otherwise repackaging things rather than using official ISO images.
And I've run into lots of cases where while pxe seems to work, maybe the keyboard doesn't work in pxe, or something else doesn't get properly initialized and you end up having a better time if you give up and boot from USB.
System Rescue CD and Clonezilla are PXE-bootable.
"OneFileLinux: A 20MB Alpine metadistro that fits into the ESP" https://news.ycombinator.com/item?id=40915199 :
> Ventoy, signed EFIstubs, USI, UKI
TIL about https://netboot.xyz/
Elephant in the room: Quantum computers will destroy Bitcoin
Someone had to say it. Maybe the current drop is normies finally waking up and realizing that extrapolated accelerating developments in quantum computers will break encryption used in Bitcoin within 5 years.
It's also extremely naive to assume it will be easy to transfer a massive decentralized project to a post-quantum algorithm. Maybe new cryptos will be invented, but Bitcoin will not "retain value".
Things that will retain value if the entire internet is broken due to rapid deployment of quantum computers will be:
- Real estate
- Physical assets (gold, silver, etc)
- Physical stock certificates (printed on actual paper)
- Paper money
Since the internet, cards, and finance may just stop functioning one day as quantum computers break all encryption.
Feel free to prove me wrong.
There is a pending hard fork to PQ (post-quantum) algorithms for all classical blockchains.
There will likely be different character lengths for account addresses and keys, so all of the DNS+HTTP web services and HTTP web forms built on top will need different form validation.
Vitalik Buterin presented on this subject a few years ago. Doubling key sizes may or may not be sufficient to limit the risk of quantum attacks on the elliptic-curve cryptography employed by Bitcoin and many other DLTs.
The Chromium browser now supports the ML-KEM (Kyber) PQ key-encapsulation mechanism.
Very few web servers have PQ ciphers enabled. It is as simple as changing a text configuration file to specify a different cipher on the webserver, once the ciphers are tested by time and money.
There are patched versions of OpenSSH server, for example, but PQ support is not yet merged there either.
There are PQ ciphers and there are PQ cryptographic hashes.
There are already PQ-resistant blockchains.
Should Bitcoin hard fork to double key sizes or to implement a PQ cipher and hash?
Spelunking for Bitcoin by generating all possible keys and checking their account balances is not prevented by PQ algorithms.
Banking and Finance and Critical Infrastructure also need to upgrade to PQ ciphers. Like mining rigs, it is unlikely that existing devices can be upgraded with PQ software; we will need to buy new devices and recycle existing non-PQ devices.
If banks are on a 5 year IT refresh cycle, that means they need to be planning to upgrade everything to PQ 5 years or more before a QC quantum computer of a sufficient number of error-corrected qubits is online for adversaries that steal cryptoassets from people on the internet.
> There is a pending hard fork to PQ Post Quantum algorithms for all classical blockchains.
Where is this pending HF for BTC?
How can all of this work realistically without constant cat & mouse catch up game?
> Spelunking for Bitcoin by generating all possible keys and checking their account balances is not prevented by PQ algorithms.
So there will be no protection against this? There is no protection possible?
Is there a PR yet, and also a BIP?
Other DLTs also have numbered document procedures for managing soft forks, hard forks, and changes to cipher and hash algorithms.
Litecoin, for example, is Bitcoin with the scrypt hash algorithm instead of SHA256.
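That substitution is visible with stdlib `hashlib` alone; a sketch contrasting the two PoW hashes (the header bytes are placeholders; N=1024, r=1, p=1 with the header as both password and salt are Litecoin's actual scrypt parameters):

```python
import hashlib

header = b"example block header bytes"  # placeholder, not a real block header

# Bitcoin-style PoW hash: double SHA-256 of the block header
sha_pow = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Litecoin-style PoW hash: scrypt(N=1024, r=1, p=1), header as password and salt
scrypt_pow = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)

print(sha_pow.hex())
print(scrypt_pow.hex())
```

scrypt's memory-hardness is what made early Litecoin mining resistant to the SHA-256 ASICs of the time (before scrypt ASICs arrived).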
Proof-of-Work mining rigs implement hashing algorithms in hardware as ASICs and FPGAs.
Stellar and Ripplenet validation servers still implement same in software FWIU?
Are there already ASICs for any of the PQ Hash and Cipher algorithms?
SSL termination is expensive but necessary for HTTPS Everywhere, and now for HTTP Strict-Transport-Security (HSTS) headers and the HSTS preload list.
Are there still ASIC or FPGA SSL accelerator cards that need to implement PQ ciphers and hashes?
Multisig and similar m:n smart contracts support requiring more keys for a transaction to complete.
Rather than in an account with one sk/pk pair, funds can be stored in escrow such that various conditions must be met before a transaction can move the funds.
Running a full node helps the network by keeping another copy online and synchronized. A full node can optionally also index by transaction id (with LevelDB).
The block and transaction messages can be logged and monitored. Though it doesn't cost anything to check a balance given authorized or forged keys.
Spending the money in a bitcoin account discloses the public key, whereas only the double hash of the pubkey is necessary to send money to an account or scriptHash.
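The "double hash of the pubkey" above refers to Bitcoin's HASH160 construction (and HASH256 for txids/block hashes); a stdlib sketch with a placeholder pubkey:

```python
import hashlib

def hash256(b: bytes) -> bytes:
    """Bitcoin's HASH256: double SHA-256, used for txids and block hashes."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def hash160(b: bytes) -> bytes:
    """Bitcoin's HASH160: RIPEMD160(SHA256(x)). P2PKH addresses commit to
    hash160(pubkey), so the pubkey itself stays hidden until a spend reveals it.
    (ripemd160 may require OpenSSL's legacy provider on some systems.)"""
    return hashlib.new("ripemd160", hashlib.sha256(b).digest()).digest()

pubkey = bytes.fromhex("02" + "11" * 32)  # placeholder compressed pubkey
print(hash256(pubkey).hex())
```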
The market does not appear to price infosec value, risk, or technical debt into cryptoasset prices.
PQ or non-PQ does not predict asset price in 2025-02.
Breach disclosures apparently hardly affect asset prices; which is unfortunate if we want them to limit their and our taxable losses.
An Experimental Study of Bitmap Compression vs. Inverted List Compression
ScholarlyArticle: "An Experimental Study of Bitmap Compression vs. Inverted List Compression" (2017) https://dl.acm.org/doi/10.1145/3035918.3064007
Inverted index > Compression: https://en.wikipedia.org/wiki/Inverted_index#Compression :
> For historical reasons, inverted list compression and bitmap compression were developed as separate lines of research, and only later were recognized as solving essentially the same problem. [7]
> and only later were recognized as solving essentially the same problem. [7]
"Hard problems that reduce to document ranking" https://news.ycombinator.com/item?id=43174910#43175540
Ctrl-F "zoo" https://westurner.github.io/hnlog/#comment-36839925 #:~:text=zoo :
> Complexity Zoo, Quantum Algorithm Zoo, Neural Network Zoo
Programming Language Zoo https://plzoo.andrej.com/
IBM completes acquisition of HashiCorp
Hashicorp blog post: https://www.hashicorp.com/en/blog/hashicorp-officially-joins...
Jeff Bezos' revamp of 'Washington Post' opinions leads editor to quit
Here's a post linking to a BBC article about same that was flagged and censored here: https://news.ycombinator.com/item?id=43191562 ;
> the newspaper's opinion section will focus on supporting “personal liberties and free markets",
Free trade! Fair trade!
> and pieces opposing those views will not be published.
Boo, fascist corporate oligarchical censorship!
You might say, "actually that's not fascism, Bob" because fascism is when the government exercises control over the non-government-held corporations.
Fascism is like domming Apple's DEI policies when without Congress you can't make law, or telling people they should buy TikTok for you, with your name on all the checks.
Fascism: https://en.wikipedia.org/wiki/Fascism
I think the argument is that since the US has legal and well-established corruption there's no difference. Meaning, a billionaire is easily more influential in legislation than a legislator or even multiple legislators, so he is now inseparable from government and is an agent of the government.
At least they've disclosed their intent to impose editorial bias on the opinion section. It doesn't say "Fair and Balanced."
From "Fed to ban policymakers from owning individual stocks" (2021) https://news.ycombinator.com/item?id=28951646 :
> "Blind Trust" > "Use by US government officials to avoid conflicts of interest" https://en.wikipedia.org/wiki/Blind_trust :
>> The US federal government recognizes the "qualified blind trust" (QBT), as defined by the Ethics in Government Act and related regulations.[1] In order for a blind trust to be a QBT, the trustee must not be affiliated with, associated with, related to, or subject to the control or influence of the government official.
>> Because the assets initially placed in the QBT are known to the government official (who is both creator and beneficiary of the trust), these assets continue to pose a potential conflict of interest until they have been sold (or reduced to a value less than $1,000). New assets purchased by the trustee will not be disclosed to the government official, so they will not pose a conflict.
The Ethics in Government Act which created OGE was passed by Congress in 1978 in response to Watergate: https://en.wikipedia.org/wiki/Ethics_in_Government_Act
Should Type Theory (HoTT) Replace (ZFC) Set Theory as the Foundation of Math?
In a practical sense, hasn't type theory already replaced ZFC in the foundations of math? Lean is what working mathematicians currently use to formally prove theorems, and Lean is based on type theory rather than ZFC.
But HoTT was removed from Lean core?
From https://news.ycombinator.com/item?id=42440016#42444882 :
> /? Hott in lean4 https://www.google.com/search?q=hott+in+lean4
https://github.com/forked-from-1kasper/ground_zero :
> Lean 4 HoTT Library
"Should Type Theory Replace Set Theory as the Foundation of Mathematics?" (2023) https://link.springer.com/article/10.1007/s10516-023-09676-0
Show HN: Probly – Spreadsheets, Python, and AI in the browser
Probly was built to reduce context-switching between spreadsheet applications, Python notebooks, and AI tools. It’s a simple spreadsheet that lets you talk to your data. Need pandas analysis? Just ask in plain English, and the code runs right in your browser. Want a chart? Just ask.
While there are tools available in this space like TheBricks, Probly is a minimalist, open-source solution built with React, TypeScript, Next.js, Handsontable, Hyperformula, Apache Echarts, OpenAI, and Pyodide. It's still a work in progress, but it's already useful for my daily tasks.
TIL that Apache Echarts can generate WAI-ARIA accessible textual descriptions for charts and supports WebGL. https://echarts.apache.org/en/feature.html#aria
apache/echarts: https://github.com/apache/echarts
Marimo notebook has functionality like rxpy and ipyflow to auto-reexecute input cell dependencies fwiu: https://news.ycombinator.com/item?id=41404681#41406570 .. https://github.com/marimo-team/marimo/releases/tag/0.8.4 :
> With this release, it's now possible to create standalone notebook files that have package requirements embedded in them as a comment, using PEP 723's inline metadata
marimo-team/marimo: https://github.com/marimo-team/marimo
ipywidgets is another way to build event-based UIs in otherwise Reproducible notebooks.
datasette-lite doesn't yet work with jupyterlite and emscripten-forge yet FWIU; but does build SQLite in WASM with pyodide. https://github.com/simonw/datasette-lite
pygwalker: https://github.com/Kanaries/pygwalker .. https://news.ycombinator.com/item?id=35895899
How do you record manual interactions with UI controls and spreadsheet grids as code, for reproducibility?
> "Generate code from GUI interactions; State restoration & Undo" https://github.com/Kanaries/pygwalker/issues/90
> The Scientific Method is testing, so testing (tests, assertions, fixtures) should be core to any scientific workflow system.
ipytest has a %%ipytest cell magic to run functions that start with test_ and subclasses of unittest.TestCase with the pytest test runner. https://github.com/chmp/ipytest
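A minimal notebook sketch of that (cells shown together here; `%%ipytest` must start its own cell):

```
# -- cell 1: one-time setup --
%pip install -q ipytest
import ipytest
ipytest.autoconfig()

# -- cell 2: pytest collects and runs this cell's test_* functions --
%%ipytest
def test_sum():
    assert sum([1, 2, 3]) == 6
```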
How can test functions with assertions be written with Probly?
Probly doesn't have built-in test assertion functionality yet, but since it runs Python (via Pyodide) directly in the browser, you can write test functions with assertions in your Python code. The execute_python_code tool in our system can run any valid Python code, including test functions.
This is something we're considering for future development, so this is a great shout!
Tests that can be copied or exported from a notebook into a .py module are advantageous for prototyping and reusability.
There are exploratory/discovery and explanatory forms and workflows for notebooks.
A typical notebook workflow: get it working with Ctrl-Enter and manual checks of the output; wrap it in a function (or functions) with defined variable scopes and few module/notebook globals; write a test function that checks the output every time; write markdown and/or docstrings; and then decide what of this can be reused from regular modules.
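The steps above can be sketched with hypothetical names (get it working, wrap it in a function with explicit scope, then guard it with a test function):

```python
# Step 2 of the workflow: the exploratory cell's logic wrapped in a
# function with an explicit parameter instead of notebook globals.
def summarize(samples):
    """Return (mean, minimum, maximum) for a non-empty sequence."""
    n = len(samples)
    return (sum(samples) / n, min(samples), max(samples))

# Step 3: a test function that checks the output every time the cell
# runs, instead of a human eyeballing it once.
def test_summarize():
    mean, lo, hi = summarize([1, 2, 3, 4])
    assert mean == 2.5
    assert (lo, hi) == (1, 4)

test_summarize()  # a runner (pytest/ipytest) would collect this instead
```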
nbdev has an 'export a notebook input cell to a .py module' feature, and formatted docstrings like sphinx apidoc but in notebooks. IPython has `%pycat module.py` for pygments-style syntax highlighting of external .py modules and `%save output.py` for saving input lines to a file, but there are not yet IPython magics to read from or write to specific lines within a file like nbdev does.
To run the chmp/ipytest %%ipytest cell magic with line or branch coverage, it's necessary to `%pip install ipytest pytest-cov` (or `%conda install ipytest pytest-cov`) first.
jupyter-xeus supports environment.yml with jupyterlite with packages from emscripten-forge: https://jupyterlite-xeus.readthedocs.io/en/latest/environmen...
emscripten-forge src: https://github.com/emscripten-forge/recipes/tree/main/recipe... .. web: https://repo.mamba.pm/emscripten-forge
A Systematic Review of Quantum Computing in Finance and Blockchains
"From Portfolio Optimization to Quantum Blockchain and Security: A Systematic Review of Quantum Computing in Finance" (2023) https://arxiv.org/abs/2307.01155
The FFT Strikes Back: An Efficient Alternative to Self-Attention
Google introduced this idea in 2022 with "FNet: Mixing Tokens with Fourier Transforms" [0].
Later they found that their TPUs' matrix multiplication was faster than the FFT in most scenarios.
That seems like an odd comparison, specialty hardware is often better, right?
Hey, do DSPs have special hardware to help with FFTs? (I’m actually asking, this isn’t a rhetorical question, I haven’t used one of the things but it seems like it could vaguely be helpful).
(Discrete) Fast Fourier Transform implementations:
https://fftw.org/ ; FFTW: https://en.wikipedia.org/wiki/FFTW
gh topic: fftw: https://github.com/topics/fftw
xtensor-stack/xtensor-fftw is similar to numpy.fft: https://github.com/xtensor-stack/xtensor-fftw
FFTW-compatible interfaces: NVIDIA cuFFTW, amd/amd-fftw, and the Intel MKL FFTW wrappers
NVIDIA CuFFT (GPU FFT) https://docs.nvidia.com/cuda/cufft/index.html
ROCm/rocFFT (GPU FFT) https://github.com/ROCm/rocFFT .. docs: https://rocm.docs.amd.com/projects/rocFFT/en/latest/
AMD FFT, Intel FFT: https://www.google.com/search?q=AMD+FFT , https://www.google.com/search?q=Intel+FFT
project-gemmi/benchmarking-fft: https://github.com/project-gemmi/benchmarking-fft
"An FFT Accelerator Using Deeply-coupled RISC-V Instruction Set Extension for Arbitrary Number of Points" (2023) https://ieeexplore.ieee.org/document/10265722 :
> with data loading from either specially designed vector registers (V-mode) or RAM off-the-core (R-mode). The evaluation shows the proposed FFT acceleration scheme achieves a performance gain of 118 times in V-mode and 6.5 times in R-mode respectively, with only 16% power consumption required as compared to the vanilla NutShell RISC-V microprocessor
"CSIFA: A Configurable SRAM-based In-Memory FFT Accelerator" (2024) https://ieeexplore.ieee.org/abstract/document/10631146
/? dsp hardware FFT: https://www.google.com/search?q=dsp+hardware+fft
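The FNet idea mentioned earlier in the thread replaces the self-attention sublayer with a parameter-free Fourier transform over the sequence and hidden dimensions, keeping only the real part. A minimal numpy sketch of that mixing step (assuming numpy; an illustration, not the paper's code):

```python
import numpy as np

def fnet_mixing(x):
    """FNet-style token mixing: a 2D DFT across the sequence and hidden
    dimensions, keeping only the real part. No learned parameters,
    unlike self-attention."""
    return np.fft.fft2(x).real

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))   # (seq_len, hidden)
mixed = fnet_mixing(x)
assert mixed.shape == x.shape  # mixing preserves shape, like attention
```

In the paper this block replaces the attention sublayer, with the usual feed-forward sublayers and layer norms kept around it.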
Ask HN: Who's been picking up trade deals due to US tariff threats?
Trump claimed tariffs are necessary under drug-war wartime authorizations, and threatened our immediate neighbors and BRICS with tariffs this year.
Which countries are winning trade and tech deals while the US is forcing itself to pay tariffs on imports to collectively punish others without due process?
Example: China is now buying Canadian oil (instead of Russian oil at a discount due to sanctions)
/? canada tariffs: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
"Trump tariffs inspire economic patriotism in Canada" https://news.ycombinator.com/item?id=42922237
FWIU they have regressed from free trade all the way to tariffs on Canada at 25% on everything but oil, which is at 10%?
And they uncancelled and re-approved the Keystone XL tar sands midwest plains liability.
As informal as it might seem to list places where the US might start losing out on trade, given the current political situation, the last thing needed is a simple list that might fuel narcissistic outrage by threatening the golden one's genius plan - especially later on, when (if?) it doesn't pan out and a new round of allocating blame and punishment begins. One cannot expect such people to innately understand why some reactions occur.
Just look at Musk's outburst after he burnt tw-tter, and the long-term advertisers figured that the people they were selling to had probably moved on to other platforms, and likewise pulled advertising from the new mess.
All we (the rest of the world) can do is hope that the initial tariffs imposed on trade will be enough to look like they're having a win.
"EU and Mexico revive stalled trade deal as Trump tariffs loom" https://www.reuters.com/world/eu-mexico-revive-stalled-trade...
FWIU the President of Mexico told Trump that they would be more concerned about migrants crossing their northern border if the US were more concerned about arms trafficked to Mexico.
That's sort of funny, but then again IMO the biggest difference to the immigration rate would be something the US doesn't seem to have - a fixed minimum rate of pay regardless of whether people are US-born citizens or not.
Basic income experiments here have not succeeded FWIU; https://hn.algolia.com/?q=basic+income
Cutting taxes did not pay for the relief checks signed with his personal brand, nor for the record relief-loan fraud; so that's still not paid off either.
Recipe for national debt: increase expenses and cut revenue.
"Starve the beast" said the Reaganites who increased taxes on the middle class and increased foreign war expenditures and future obligations: https://en.wikipedia.org/wiki/Starve_the_beast
"Where do people get that Reagan raised taxes 11 times? I don’t completely understand this." https://www.quora.com/Where-do-people-get-that-Reagan-raised...
"The Mostly Forgotten Tax Increases of 1982-1993" https://www.bloomberg.com/view/articles/2017-12-15/the-mostl...
Total Tax Receipts as a % of GDP would tell the story; but they also increased the debt limit 18 times.
"Federal Receipts as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/FYFRGDA188S
"Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
Combined chart: https://fred.stlouisfed.org/graph/?g=1DTZx
History of the US debt ceiling: https://en.wikipedia.org/wiki/History_of_the_United_States_d...
I don't think that debt financing onto future generations and starting conflicts that have since cost trillions made America great.
Would improving conditions in their home country change the immigration rate? But won't there still be climate refugees? It's hot, there's no water, there's no work.
Just watched an alarmed presentation on U-3 and U-6 as indicators of unemployment rate: U-6 includes underemployment at less than 25K/yr.
Ah, sorry for not being more to the point - I forget just how many systems there are globally. I meant minimum wage payments.
The effect would not be immediate and would take years, though a tax break for businesses for each US citizen on their payroll would surely help speed it up. I have known too many US-based people who moan about the influx but are happy to have jobs done by illegals at a really dirt-cheap rate.
As an additional note, Australia had this fair-wage pay as law for a great number of years, but somewhere along the line people plain forgot. Back in the 70s the country had Aussie-based workers who travelled the entire country, north to south and back again, to follow seasonal work in agriculture.

However, the process for hiring an employee involved a handful of red tape that most DIY accounting folks didn't like dealing with, so the option of a simplified payment stream to a company providing labour was, and still is, very attractive, even though some of the procedures have been relaxed. (Also, until a decade or two ago, the employer would be paying for years into a black-hole fund for workers' compensation, even if their crop was wiped out or they otherwise would not be needing any additional workers.)

In the 90s I got to witness all sorts of BS schemes to get non-Aussie workers into the workforce, like 457 visas so companies could fill jobs that they couldn't fill locally. Eventually everyone and their dog cottoned on to the shenanigans, and for the most part a lot went by the wayside.

We still hear noise here in Aussie land about how backpackers save this and that because Aussies are lazy and whatnot - nope; most here who grew up in the heat know that working in conditions too hot, or in some other unsafe situation, has long-term consequences - it might be 5 years down the track before the kidney damage finally reaches a point where it needs to be addressed. Backpackers (tourists with a work visa who travel and work their way around a country) are typically long gone back in their homeland, so they're ideal for scam labour companies and the odd no-moral-compass farmer.
Andrew Yang (and Sam Altman) probably have a piece to say about basic income / wage subsidies in context to the robots and sophisticated AI taking our jobs and tiny houses.
Migrant labor exploitation? What would our food cost? How many people does it take to mow a sand lawn?
I hope that the current hate for immigrants isn't much more than divisive political fearmongering and splitting that will diminish when they regain their humility and self respect due to food prices and having a conscience.
Creating jobs on the other side of the border would probably be more cost-effective.
IDK, "Terminator: Dark Fate", "The Big Green", "Amistad", "McFarland, USA"
H1B competition in tech is fierce here, and the reason we don't get hired in our own country.
India and China have more gifted students than we have students.
Americans won't work the fields anymore; we'll pay Asia to manufacture robots and learn sustainable farming practices like no-till farming later.
The new sustainable aviation fuel subsidy (of the previous administration) requires sustainable farming practices so that we're not subsidizing unrealized losses due to soil depletion. It doesn't make sense to pay them to waste the land without consideration.
Your success with fighting desertification is inspiring. We're still not sure why the Colorado river is running dry for the southwest, where it's always been dry and carelessly farmed and drilled. TIL about bunds, and about preventing water from flashing off due to compacted soil trashed by years of tilling and overgrazing.
I've a few friends who've traveled to Oceania for school and work.
There should be an Australian "The Office".
Welfare economics: https://en.wikipedia.org/wiki/Welfare_economics
I would never consider a fair minimum wage, applied regardless of citizenship, to be a subsidy.
As for supporting less developed countries -- around 1975 the formation of the Lima agreement[1] came to pass, a United Nations strategy to shift some manufacturing to less developed countries and share the wealth. I suspect that's yet another reason why some US companies expanded into South American countries.
As for A.I. bots taking over our simple repetitive jobs, or manual jobs like construction duties: due to complexity and unexpected job functions, it's not something that's going to be worth pursuing too hard. Construction tasks were the sort of jobs I was referencing that some people liked getting done for rock-bottom dollar by way of desperate immigrants; and though people do greatly benefit from really low rates when it's their own money and mostly DIY projects, a bot replacement is a novel idea, but ... an A.I. general-practice doctor bot will exist long (years) before that, since the data for medicating and solving health issues is a much, much larger data set, spanning at least 50 years of information, most of which is still sort of relevant. As for construction - well, obviously one meets plenty of people who think it's fairly simple and that a few lines in a DIY book have it covered.

Yes, the minimum wage would mean the nature and scope of some people's personal (back yard) projects would change. Small business would obviously be impacted, but a smart govt would have in place something to balance out having to pay higher wages. Overall it's better that people can work if they want to.
Food prices generally should not have a high labour-wage component when the food leaves the farm gate - unless it's a back-yard hobby situation. If a whopping big farm is crying foul that minimum wage rates are so high they can't make a profit, 9/10 times they're doing something wrong, or ... they got / are getting fleeced when they sold / sell their produce to the markets. The act of fleecing a farmer or farm group is, IMO, immoral.
Actually (given the level of interest by a few people I've met or know) I would think a lot of people would love to work in the fields, so long as good farming practices are observed: safe work, not half killing themselves so that by day's end they are done for; a decent and liveable wage (i.e. not slave-labour rates); and time for themselves, with perhaps an hour's travel to local affordable further / higher education they can participate in part-time if they are so inclined.
We also hear the same - that Aussies won't or don't want to work on a farm because [insert some BS excuse here], and that it's fortunate there are backpackers (young tourists) who are prepared to. If the job is straight, pays OK, with half-decent conditions, there will be no end of willing home-grown applicants.
As for sustainable aviation fuel subsidy - I would guess for most people it implies a focus on more obvious oil seed crops on otherwise good agricultural land which could be utilised for food or fibre production. But I think the trick is to work out what oil producing crops could exist in marginal country, moreover a tree crop where it's feasible to drip irrigate or under tree root so that the very limited water isn't lost evaporating from the soil surface.
As for tackling desertification, sustainability is a new mantra in the farming sector. I don't recall any significant projects aimed at revegetating large tracts of arid land that I'd call a success. (There are, though, property owners who have done wonders to rehabilitate flogged country so it becomes more productive whilst ensuring a diverse landscape.) However, on fighting desertification, I do recall from the 90s or so a very good example of the benefits of Permaculture as per Bill Mollison [2]: rehabilitating a small knob / small hill of ground, iirc somewhere in Africa, back into a scene of lush greenery which might have even included some banana trees, whilst the rest of the landscape was bleak and dry with a minimum of grassy vegetation.
>There should be an Australian "The Office".
Never really got into either version of The Office; however, something in the same vein but a little over the top is a workplace-based Aussie show - Swift and Shift Couriers [3]
[1] https://www.unido.org/sites/default/files/2014-04/Lima_Decla... [pdf]
I looked it up:
Wage subsidy: https://en.wikipedia.org/wiki/Wage_subsidy :
> A wage subsidy is a payment to workers by the state, made either directly or through their employers. Its purposes are to redistribute income and to obviate the welfare trap attributed to other forms of relief, thereby reducing unemployment. It is most naturally implemented as a modification to the income tax system.
Permaculture and Bill Mollison: perpetual spinach and Swiss Chard do well here. TIL there are videos about perennials and about companion planting. Potatoes in the soil below tomatoes, in food safe HDPE 5 gallon buckets with a patch cut into the side. But that's still plastic in the garden. Three Sisters: Corn, Beans, Squash.
Import substitution industrialization > Latin America: https://en.wikipedia.org/wiki/Import_substitution_industrial...
"The Biggest Mapping Mistake of All Time" re: Baja and the Sea of Cortez: https://youtube.com/watch?v=Hcq9_Tw2eTE&
Confessions of an Economic Hit Man is derided as untrue. See also links to "War is a Racket", which concludes with chapter "5. To Hell With War" https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit...
Structural adjustment > Criticisms: https://en.wikipedia.org/wiki/Structural_adjustment
Golden rule: https://en.wikipedia.org/wiki/Golden_Rule
The "Farming Simulator" game has the user operate multiple labor positions on the farm as concurrently as they can master.
We can't do similar backpacker-style field labor because there aren't travel visas that are also work visas FWIU.
Volunteering while on a student visa is fine AFAIU. Non-commercial open source software development is also something that folks enjoying their travels can do for no money here.
Laser weeding is still fairly underutilized. There's not yet an organic standard for herb. Nicotine is an approved "organic" pesticide, for example.
Here we have Arbor Day Foundation, which will ship 10 free saplings for an easy annual tree survey.
TIL trees can be shipped domestically in a triangular cardboard carton, in a plastic bag, with a damp sheet of paper around their roots.
It may be the work of Greening Australia that I've heard about: https://en.wikipedia.org/wiki/Greening_Australia
One of a number of videos about planting trees on YouTube: "Australia's Remarkable Desert Regreening Project Success" https://youtube.com/watch?v=HRo0RG02Mg0&
I've heard that China has changed their approach to fighting desertification in the Gobi desert; toward a healthy diversity of undergrowth instead of a monoculture of trees; ecology instead of just planting rows of trees that won't last by comparison.
Half Moon Bunds look like they are working to regreen the Sahel. Bunds would require different farm machines than rototilling tractors that cause erosion.
"Mastering the Art of Half Moon Bunds" https://youtube.com/watch?v=XyH6dFlv9dk&
Andrew Millison's light board videos are a great resource for a software dev with no real experience in hydrological engineering for sustainable agriculture.
"Inside Africa's Food Forest Mega-Project" https://youtube.com/watch?v=xbBdIG--b58&
"Flight of the Conchords" (HBO) and "Eagle vs Shark" are from NZ.
From YouTube playlists: Derek from Veritasium, The Slow Mo Guys, The Engineering Mindset, Morello, Jade from upandatom, Sarah from Chuck, and Synthony are all from AU as I recall.
"Be Kind Rewind" (2008) https://en.wikipedia.org/wiki/Be_Kind_Rewind
The basic minimum wage [1] is a wage which doesn't need to be subsidised. Only when, as I described, eliminating dirt-poor / poverty-line pay rates from a region that had relied on them, would any govt perhaps need to offer a subsidy to employers to swallow the bitter pill of paying the fair rate to all workers. In Australia, employers are not entitled to any payment from govt or other entities to continue to employ a long-term employee earning an existing wage. Some big companies in the past have had a bit of a whine and gone for a grab at some sort of handout, but -- after the last major abuse, where the govt helped a couple of poor companies out and got burnt when the sods moved overseas or restructured, effectively ending the jobs at risk that were the bargaining chip in the first instance ...
I enjoy meeting the few backpackers that visit my region and I've no issue with them working, but for a very long time I've informed any that I crossed paths with, that they are entitled to the same pay rate as anyone else here, ie don't get scammed -- iirc, all of them had been getting scammed one way or another.
I dislike (very understated) the majority of the labour hire companies that were around arranging the on-farm work, as they have in the past actively discriminated against regular Aussies looking for a job - since an Aussie is more likely to know what is right and wrong and what is not acceptable, and thus the labour hirer's long-running scam would soon come under threat. It took time, but now the govt here is looking at any bad contractors; yet we still hear noise pieces chirping that Aussies are lazy ... and ...
I am surprised there's no US visa that addresses backpackers -- I'd thought that in the free Western world, the small fraction of tourists who wanted to backpack their way across the country could do so if they wanted that option and met the relevant criteria. As I understand it, there are generally rules such as that the prospective backpacker is not actually flat broke.
Finding a cheap, sustainable, close-to-no-mechanical way of removing weeds from a crop is somewhat the holy grail of farming. Seems like simple might win out in the end, though: small robotic platforms that traverse the field ... the existing models I've seen thus far are solar-powered / electric, and detect, identify, and then mechanically upset / remove the weed as the platform slowly travels its target paddock.
As for a massive project aimed at greening the Great Australian Desert [2], I'd not heard about it, but I do not mean to imply it's not happening; Australia is actually a pretty big place. However, the cover photo in the link suggests not desert proper but very marginal flat country, once cleared and felled, whose soils were flogged after the initial cropping; all it could be is poor farming country that is best rehabilitated and re-vegetated with trees and shrubs.
There was actually a lot of conservation / rehabilitation work carried out in the 1920s, once it became very clear that the farming practices of the time did not suit the geology and climate of this country.
I think as the years progress, here in Australia we will see different farming systems come into play, like the alley cropping system. [3]
[1] https://en.wikipedia.org/wiki/Minimum_wage
[2] https://lifeboat.com/blog/2022/09/how-australia-is-regreenin...
[3] https://www.agrifarming.in/alley-cropping-system-functions-o...
Edit to fix links
Reaganomics > Policies: https://en.wikipedia.org/wiki/Reaganomics#Policies :
> The 1982 tax increase undid a third of the initial tax cut. In 1983 Reagan instituted a payroll tax increase on Social Security and Medicare hospital insurance. [25] In 1984 another bill was introduced that closed tax loopholes. According to tax historian Joseph Thorndike, the bills of 1982 and 1984 "constituted the biggest tax increase ever enacted during peacetime". [26]
Also,
>> said the Reaganites who increased taxes on the middle class and increased foreign war expenditures and future obligations:
To clarify, Reagan reduced taxes for the wealthiest the most, thus shifting the effective tax burden to the middle class and increasing inequality.
"Changes in poverty, income inequality, and the standard of living in the United States during the Reagan years" https://pubmed.ncbi.nlm.nih.gov/8500951/#:~:text=The%20rate%... :
> The rate of poverty at the end of Reagan's term was the same as in 1980. Cutbacks in income transfers during the Reagan years helped increase both poverty and inequality. Changes in tax policy helped increase inequality but reduced poverty.
Gini Index: https://en.wikipedia.org/wiki/Gini_coefficient
GINI Index for the United States: https://fred.stlouisfed.org/series/SIPOVGINIUSA
"Make America Great Again": https://en.wikipedia.org/wiki/Make_America_Great_Again
At that time - during the 1970s and 1980s - Federal Debt as a percentage of GDP was much lower than it is today: 30-53% in the 1980s versus 120% of GDP in 2024:
> "Federal Debt: Total Public Debt as Percent of Gross Domestic Product" https://fred.stlouisfed.org/series/GFDEGDQ188S
The need for memory safety standards
> Looking forward, we're also seeing exciting and promising developments in hardware. Technologies like ARM's Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code.
IIRC there's some way that a Python C extension can accidentally disable the NX bit for the whole process.. https://news.ycombinator.com/item?id=40474510#40486181 :
>>> IIRC, with CPython the NX bit doesn't work when any imported C extension has nested functions / trampolines
>> How should CPython support the mseal() syscall? [which was merged in Linux kernel 6.10]
> We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process. In addition, as outlined in our Secure by Design whitepaper and in our memory safety strategy, we are deeply committed to building security into the foundation of our products and services.
> That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.
Secureblue: https://github.com/secureblue/Trivalent has hardened_malloc.
Memory safety notes and Wikipedia concept URIs: https://news.ycombinator.com/item?id=33563857
...
A graded memory safety standard is one aspect of security.
> Tailor memory safety requirements based on need: The framework should establish different levels of safety assurance, akin to SLSA levels, recognizing that different applications have different security needs and cost constraints. Similarly, we likely need distinct guidance for developing new systems and improving existing codebases. For instance, we probably do not need every single piece of code to be formally proven. This allows for tailored security, ensuring appropriate levels of memory safety for various contexts.
> Enable objective assessment: The framework should define clear criteria and potentially metrics for assessing memory safety and compliance with a given level of assurance. The goal would be to objectively compare the memory safety assurance of different software components or systems, much like we assess energy efficiency today. This will move us beyond subjective claims and towards objective and comparable security properties across products.
Piezoelectric Catalyst Destroys Forever Chemicals
> The system’s energy requirements vary depending on the wastewater’s conditions, such as contaminant concentration, water matrix, or the customer’s discharge requirements. In one of Oxyle’s full-scale units that treats 10 cubic meters per hour, energy consumption measured less than 1 kilowatt-hour per cubic meter, according to the company.
> Using the flow of water, rather than electricity, to activate the reaction makes the method far more energy efficient than other approaches, says Mushtaq.
A different approach, with no energy-per-volume (kWh/m^3) metric in the abstract:
"Photocatalytic C–F bond activation in small molecules and polyfluoroalkyl substances" (2024) https://www.nature.com/articles/s41586-024-08327-7 .. https://news.ycombinator.com/item?id=42444729
History of interactive theorem proving [pdf]
The most recent reference listed in this paper is "Truly modular (co)datatypes for Isabelle/HOL" (2014) .. https://scholar.google.com/scholar?cites=1837719396691256842...
Automated theorem proving > Related problems: https://en.wikipedia.org/wiki/Automated_theorem_proving#Rela...
awesome-code-llm > Coding for Reasoning: https://github.com/codefuse-ai/Awesome-Code-LLM#31-coding-fo...
New type of microscopy based on quantum sensors
"Optical widefield nuclear magnetic resonance microscopy" (2025) https://www.nature.com/articles/s41467-024-55003-5 :
> Crucially, each camera pixel records an NMR spectrum providing multicomponent information about the signal’s amplitude, phase, local magnetic field strengths, and gradients.
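The per-pixel readout described in the quote can be sketched with synthetic data (the sampling rate and line frequency below are assumptions for illustration, not the paper's pipeline): each pixel's time trace is Fourier-transformed, and the peak bin and its angle give the local amplitude, frequency, and phase.

```python
import numpy as np

# Synthetic widefield sketch: every camera pixel records a time trace; an
# FFT per pixel yields a spectrum whose peak gives amplitude/frequency and
# whose complex angle gives the local phase.
fs = 1000.0                        # sampling rate, Hz (assumed)
t = np.arange(512) / fs
f0 = 125.0                         # a single NMR line, Hz (assumed)
phases = np.linspace(0, np.pi / 2, 16).reshape(4, 4)  # 4x4 "camera"
traces = np.cos(2 * np.pi * f0 * t[None, None, :] + phases[..., None])

spectra = np.fft.rfft(traces, axis=-1)        # one spectrum per pixel
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = np.abs(spectra).argmax(axis=-1)        # per-pixel peak bin
assert np.all(peak == peak[0, 0])             # same line at every pixel
print(f"detected line at {freqs[peak[0, 0]]:.1f} Hz")  # 125.0 Hz
```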
Show HN: jsonblog-schema – a JSON schema for making your blog from one file
JSON-LD or YAML-LD can be stored in the frontmatter of Markdown documents.
Schema.org is an RDFS schema with Classes and Properties:
https://schema.org/BlogPosting
Syntax examples can be found below the list of Properties, on the "JSON-LD" tab.
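A minimal BlogPosting in JSON-LD (illustrative values, shaped like the examples on that tab), which could sit between the `---` delimiters of Markdown frontmatter:

```python
import json

# A minimal schema.org BlogPosting as JSON-LD; all field values here are
# illustrative placeholders.
post = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "Show HN: jsonblog-schema",
    "datePublished": "2025-01-01",
    "author": {"@type": "Person", "name": "Example Author"},
}

print(json.dumps(post, indent=2))
```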
The pydantic models for Schema.org in lexiq-legal/pydantic_schemaorg haven't yet been rebuilt for pydantic v2 FWIU: https://github.com/lexiq-legal/pydantic_schemaorg
W3C SHACL (Shapes Constraint Language) is the Linked Data schema validation spec; it's an alternative to JSON Schema, and there are many implementations.
How core Git developers configure Git
skwp/git-workflows-book > .gitconfig appendix: https://github.com/skwp/git-workflows-book?tab=readme-ov-fil... :
[alias]
unstage = reset HEAD # remove files from index (tracking)
uncommit = reset --soft HEAD^ # go back before last commit, with files in uncommitted state
learngitbranching: https://learngitbranching.js.org/
charmbracelet/git-lfs-transfer: https://github.com/charmbracelet/git-lfs-transfer
jj-vcs/jj: https://github.com/jj-vcs/jj
Ggwave: Tiny Data-over-Sound Library
The acoustic modem is back in style [1]! And, of course, same frequencies (DTMF) [2], too!
DTMF has a special place in the phone signal chain (signal at these frequencies must be preserved, end to end, for dialing and menu selection), but I wonder if there's something more efficient, using the "full" voice spectrum, with the various vocoders [3] in mind? Although it would be much creepier than hearing some tones.
[1] Touch tone based data communication, 1979: https://www.tinaja.com/ebooks/tvtcb.pdf
[2] touch tone frequency mapping: https://en.wikipedia.org/wiki/DTMF
[3] optimized encoders/decoders for human speech: https://vocal.com/voip/voip-vocoders/
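As a concrete sketch of the DTMF scheme in [2] (assuming numpy; the frequencies are the standard keypad pairs, the helper name is made up): each key is the sum of one row tone and one column tone, and it can be decoded by finding the two dominant spectral peaks.

```python
import numpy as np

# DTMF sketch: each key is a row-tone + column-tone pair; the digit '5'
# is 770 Hz (row) + 1336 Hz (column) on the standard keypad.
fs = 8000.0
t = np.arange(int(0.1 * fs)) / fs            # 100 ms tone burst

def dtmf_tone(f_row, f_col):
    return np.sin(2 * np.pi * f_row * t) + np.sin(2 * np.pi * f_col * t)

tone = dtmf_tone(770.0, 1336.0)              # digit '5'

# Decode by locating the two largest spectral peaks.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
top2 = sorted(freqs[np.argsort(spectrum)[-2:]])
assert abs(top2[0] - 770.0) < 10 and abs(top2[1] - 1336.0) < 10
```

A real decoder would use the Goertzel algorithm (a single-bin DFT per DTMF frequency) rather than a full FFT, which is why DTMF detection was cheap even on early DSPs.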
"Using the Web Audio API to Make a Modem" (2017) https://news.ycombinator.com/item?id=15471723
Missouri woman pleads guilty to federal charge in plot to sell Graceland
Graceland > Tourist destination: https://en.wikipedia.org/wiki/Graceland
Category:Films about Elvis Presley: https://en.wikipedia.org/wiki/Category:Films_about_Elvis_Pre...
Trump Plans to Liquidate Public Lands to Finance Sovereign Wealth Fund
Once an SWF has accumulated wealth, that wealth is invested in stocks, bonds, real estate, and other financial instruments to earn even more money.
Any suggestions on what I should invest in now, knowing that a US government fund is imminent? Bitcoin? TSLA? $TRUMP? DOGE?
Or do we think that it's going to pump everything in the world?
Social Security Trust Fund: https://en.wikipedia.org/wiki/Social_Security_Trust_Fund :
> The Trust Fund is required by law to be invested in non-marketable securities issued and guaranteed by the "full faith and credit" of the federal government. These securities earn a market rate of interest.[5]
Government bond > United States: https://en.wikipedia.org/wiki/Government_bond#United_States
> [...] One projection scenario estimates that, by 2035, the Trust Fund could be exhausted. Thereafter, payroll taxes are projected to only cover approximately 83% of program obligations. [7]
> There have been various proposals to address this shortfall, including: reducing government expenditures, such as by raising the retirement age; tax increases; investment diversification [8] and, borrowing.
Robots, automation, AI, K12 Q12 STEM training, basic research, applied science
What should social security be invested in?
First, should social security ever be privatized? No, because they would steal it all in fees compared to HODL'ing Index Funds and commodities (as a hedge against inflation).
What should social security be invested in?
Low-risk investments in the interest of all of the people who plan to rely upon OASDI for retirement and accidental disability (OASDI: OA "Old Age", S "Survivors", D "Disability", I "Insurance")
What should a sovereign wealth fund be invested in?
We don't do "sovereign wealth funds" because we don't have a monarchy in the United States; we have a President, a temporary servant-leader role, and we have Congress, and they work together to prepare the budget.
The President of the United States has limited budgetary discretionary authority by design. OMB (Office of Management and Budget) must work with CBO (Congressional Budget Office) to prepare the budget.
US court upholds Theranos founder Elizabeth Holmes's conviction
I don't quite understand the reasoning for putting her in prison.
Yes, she deserves to be punished, but surely house arrest, community service, etc. make more sense for a crime of this nature, rather than using taxpayer money to house her for 9 years when she isn't a credible threat to society.
House arrest would make the math on "should I try fraud" lean heavily towards fraud, I think.
Maybe even more so if you’ve got a nice house.
Fraud: https://en.wikipedia.org/wiki/Fraud
US Sentencing Commission > Fraud: https://www.ussc.gov/topic/fraud
What would deter fraud?
"Here’s a look inside Donald Trump’s $355 million civil fraud verdict" (2024) https://apnews.com/article/trump-fraud-letitia-james-new-yor...
"Trump hush money verdict: Guilty of all 34 counts" .. "Guilty: Trump becomes first former US president convicted of felony crimes" (2024) https://apnews.com/article/trump-trial-deliberations-jury-te...
"Trump mistakes [EJC] for [ex-wife]. #Shorts" https://youtube.com/shorts/0tq3rh6bh_8 .. https://youtu.be/lonTBp9h7Fo?si=77DIJMrpBRgLcsMK
I don’t know what that’s supposed to mean regarding the topic.
> House arrest would make the math on “should I try fraud” lean heavily towards fraud I think.
You argue that house arrest is an insufficient deterrent for the level of fraud committed by defendant A.
Is the sentencing for defendant A consistent with the US Sentencing Guidelines, and consistent with other defendants convicted of fraud?
> Maybe even more so if you’ve got a nice house.
Defendant B apparently isn't even on house arrest, and apparently sent someone else to their civil rape deposition obstructively and fraudulently.
The fact that different cases play out differently, some possibly unwise and unjust, is no surprise to me.
"Lock her up!" He shouted about her. https://youtu.be/wS_Nrz5dNeU?si=XasLFHXQygx7IgSw ... /? lock her up: https://www.youtube.com/results?sp=mAEA&search_query=Lock+he...
Hard problems that reduce to document ranking
Ranking (information retrieval) https://en.wikipedia.org/wiki/Ranking_(information_retrieval...
awesome-generative-information-retrieval > Re-ranking: https://github.com/gabriben/awesome-generative-information-r...
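A toy illustration of the kind of baseline that re-rankers improve on: plain TF-IDF scoring with only the stdlib (the function and the tiny corpus are made up for the example, not from the linked resources).

```python
import math
from collections import Counter

def rank(query, docs):
    """Order documents by a plain TF-IDF score against the query:
    term frequency in the document times log inverse document frequency."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for doc in tokenized for t in set(doc))
    def score(doc):
        tf = Counter(doc)
        return sum(tf[t] * math.log(n / df[t])
                   for t in query.lower().split() if t in df)
    return sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)

docs = ["graphene conducts electricity",
        "document ranking with tf idf",
        "ranking documents is an information retrieval problem"]
print(rank("document ranking", docs))  # best-matching indices first
```

Real re-ranking pipelines layer learned models (cross-encoders, LLMs) on top of a cheap first stage like this.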
Nixon's Revolutionary Vision for American Governance (2017)
/? nixon https://hn.algolia.com/?q=nixon ...
"New evidence that Nixon sabotaged 1968 Vietnam peace deal" (2017) https://news.ycombinator.com/item?id=13296696
(Watergate (1972-1974): https://en.wikipedia.org/wiki/Watergate_scandal )
History repeats itself!
Eisenhower, JFK, LBJ, Ford, Nixon, Carter, Reagan
https://news.ycombinator.com/item?id=42547326 .. Iran hostage crisis (1979-1981) https://en.wikipedia.org/wiki/Iran_hostage_crisis :
> The hostages were formally released into United States custody the day after the signing of the Algiers Accords, just minutes after American President Ronald Reagan was sworn into office
1980 October Surprise theory: https://en.wikipedia.org/wiki/1980_October_Surprise_theory
The Wrongs of Thomas More
Hey, Thomas More!
"The Saint" (1997) https://en.wikipedia.org/wiki/The_Saint_(1997_film) :
> Using the alias "Thomas More", Simon poses as
Larry Ellison's half-billion-dollar quest to change farming
Notes for Lanai island from an amateur:
Hemp is useful for soil remediation because it's so absorbent; which is part of why testing is important.
Is there already a composting business?
Do the schools etc. already compost waste food?
"Show HN: We open-sourced our compost monitoring tech" https://news.ycombinator.com/item?id=42201207
Canadian greenhouses? Chinese-Mongolian-Canadian greenhouses are wing-shaped and set against a berm.
"Passive Solar Greenhouse Technology From China?" https://youtube.com/watch?v=FOgyK6Jieq0&
Transparent wood requires extracting the lignin.
Transparent aluminum requires a production process, too.
There are polycarbonate hurricane panels.
Reflective material on one wall of the walipini greenhouse (and geothermal) is enough to grow citrus fruit through the winter in Alliance, Nebraska. https://news.ycombinator.com/item?id=39927538
Glass allows more wavelengths of light through than plastic or recyclable polycarbonate, including UV-C, which is sanitizing.
Hydrogen peroxide cleans out fish tanks FWIU.
Various plastics are food safe, but not when they've been in the sun all day.
To make aircrete, you add soap bubbles to concrete with an air compressor.
/? aircrete dome build in HI and ground anchors
Catalan masonry vault roofs (in Spain, Italy, Mexico, Arizona,) are strong, don't require temporary arches, and passively cool most efficiently when they have an oculus to let the heat rise out of the dome to openable vents to the wind.
U.S. state and territory temperature extremes: https://en.wikipedia.org/wiki/U.S._state_and_territory_tempe... :
> Hawaii: 15 °F (−9.4 °C) to 100 °F (37.8 °C)
> Nebraska: -47 °F (−43.9 °C) to 118 °F (47.8 °C)
"140-year-old ocean heat tech could supply islands with limitless energy" https://news.ycombinator.com/item?id=38222695 :
> OTEC: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio...
North Carolina is ranked 4th in solar, has solar farms, and has hurricanes (and totally round homes on stilts). FWIU there are hurricane-rated solar panels, flexible racks, ground mounts.
https://insideclimatenews.org/news/20092018/hurricane-floren...
"Sargablock: Bricks from Seaweed" https://news.ycombinator.com/item?id=37188180
"Turning pineapple skins into soap and other cleaning products" https://www.businessinsider.com/turning-pineapple-skins-into...
"Costa Rica Let a Juice Company Dump Their Orange Peels in the Forest—and It Helped" https://www.smithsonianmag.com/innovation/costa-rica-let-jui... https://www.sciencealert.com/how-12-000-tonnes-of-dumped-ora...
Akira Miyawaki and the Miyawaki method of forest cultivation for reforestation to fight desertification by regreening: https://en.wikipedia.org/wiki/Akira_Miyawaki :
> Using the concept of potential natural vegetation, Miyawaki developed, tested, and refined a method of ecological engineering today known as the Miyawaki method to restore native forests from seeds of native trees on very degraded soils that were deforested and without humus. With the results of his experiments, he restored protective forests in over 1,300 sites in Japan and various tropical countries, in particular in the Pacific region[8] in the form of shelterbelts, woodlands, and woodlots, including urban, port, and industrial areas. Miyawaki demonstrated that rapid restoration of forest cover and soil was possible by using a selection of pioneer and secondary indigenous species that were densely planted and provided with mycorrhiza.
Mycorrhiza spores can be seeded into soil to support root network development.
/? Mycorrhiza spore kit: https://www.google.com/search?q=Mycorrhiza+spore+kit
> Miyawaki studied local plant ecology and used species that have key and complementary roles in the normal tree community.
It also works in small patches: a self-sufficient "mini forest"
/? Miyawaki method before after [video search] https://www.google.com/search?q=miyawaki%20method%20before%2...
Brewing tea removes lead from water
/? tea bag microplastic: https://www.google.com/search?q=tea+bag+microplastic
There are glass and silver tea infusers.
"Repurposed beer yeast encapsulated in hydrogels may offer a cost-effective way to remove lead from water" https://phys.org/news/2024-05-repurposed-beer-yeast-encapsul...
"Yeast-laden hydrogel capsules for scalable trace lead removal from water" (2024) https://pubs.rsc.org/en/content/articlelanding/2024/su/d4su0...
"Application of brewing waste as biosorbent for the removal of metallic ions present in groundwater and surface waters from coal regions" (2018) https://www.sciencedirect.com/science/article/abs/pii/S22133...
https://ethz.ch/en/news-and-events/eth-news/news/2024/03/tur... :
> Protein fibril sponges made by ETH Zurich researchers [from whey protein] are hugely effective at recovering gold from electronic waste.
Aqueous-based recycling of perovskite photovoltaics
They say they use green solvents, but in the list of materials at the bottom I see Lead Iodide and Cesium Iodide, which doesn't strike me as too green of a thing to use.
Also, the abstract doesn't really make it clear whether the recycling is for complete panels or the "filling" of panels (sorry for the layman's terms). And whether this applies to any Perovskite-utilizing panel or just certain kinds.
But I am generally heartened at seeing thought regarding industrial processes which consider the end of productive life as well. I wish more products were designed to eventually be taken apart and reduced using standard processes, or even per-product processes - rather than the assumption being that more and more stuff is chucked into landfills.
Perovskite solar panels contain lead so you are going to have lead around, the question is if you can recycle the lead or if it goes in the environment.
Well there is always alternative number 3 - dont use perovskites.
Organic solar cell: https://en.wikipedia.org/wiki/Organic_solar_cell :
> 19.3%
Perovskite solar cell: https://en.wikipedia.org/wiki/Perovskite_solar_cell :
> 29.8%
If you need to use ~1.5x more area, but you avoid leaking any heavy metals or similarly toxic materials - I would say that's a win.
Of course, there are other parameters to consider.
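The "~1.5x more area" figure follows directly from the two quoted record efficiencies:

```python
# Area penalty for choosing organic PV over perovskite, using the
# record cell efficiencies quoted above (19.3% vs 29.8%).
eta_organic = 0.193
eta_perovskite = 0.298

# Same power output needs inversely proportional area.
area_ratio = eta_perovskite / eta_organic
print(f"organic needs {area_ratio:.2f}x the area of perovskite")
```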
Show HN: Benchmarking VLMs vs. Traditional OCR
Vision models have been gaining popularity as a replacement for traditional OCR. Especially with Gemini 2.0 becoming cost competitive with the cloud platforms.
We've been continuously evaluating different models since we released the Zerox package last year (https://github.com/getomni-ai/zerox). And we wanted to put some numbers behind it. So we’re open sourcing our internal OCR benchmark + evaluation datasets.
Full writeup + data explorer here: https://getomni.ai/ocr-benchmark
Github: https://github.com/getomni-ai/benchmark
Huggingface: https://huggingface.co/datasets/getomni-ai/ocr-benchmark
Couple notes on the methodology:
1. We are using JSON accuracy as our primary metric. The end goal is to evaluate how well each OCR provider can prepare the data for LLM ingestion.
2. This methodology differs from a lot of OCR benchmarks, because it doesn't rely on text similarity. We believe text similarity measurements are heavily biased towards the exact layout of the ground truth text, and penalize correct OCR that has slight layout differences.
3. Every document goes Image => OCR => Predicted JSON, and we compare the predicted JSON against the annotated ground truth JSON. The VLMs are capable of Image => JSON directly, but we are primarily trying to measure OCR accuracy here. Planning to release a separate report on direct JSON accuracy next week.
This is a continuous work in progress! There are at least 10 additional providers we plan to add to the list.
The next big roadmap items are:
- Comparing OCR vs. direct extraction. Early results here show a slight accuracy improvement, but it's highly variable on page length.
- A multilingual comparison. Right now the evaluation data is English only.
- A breakdown of the data by type (best model for handwriting, tables, charts, photos, etc.)
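A minimal sketch of what a JSON-accuracy metric could look like: field-level exact match over the ground truth's leaf values (a hypothetical stand-in; the benchmark's actual metric may normalize and weight fields differently).

```python
def json_accuracy(predicted: dict, truth: dict) -> float:
    """Fraction of ground-truth leaf fields that the prediction
    reproduces exactly, keyed by their path in the nested dict."""
    def leaves(obj, prefix=()):
        if isinstance(obj, dict):
            for k, v in obj.items():
                yield from leaves(v, prefix + (k,))
        else:
            yield prefix, obj
    truth_leaves = dict(leaves(truth))
    pred_leaves = dict(leaves(predicted))
    if not truth_leaves:
        return 1.0
    hits = sum(1 for path, v in truth_leaves.items()
               if pred_leaves.get(path) == v)
    return hits / len(truth_leaves)

truth = {"invoice": {"total": "42.00", "date": "2024-01-02"}, "vendor": "Acme"}
pred  = {"invoice": {"total": "42.00", "date": "2024-01-03"}, "vendor": "Acme"}
print(json_accuracy(pred, truth))  # 2 of 3 leaf fields match
```

Unlike edit-distance text similarity, this scoring is indifferent to layout: only the extracted field values matter.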
Harmonic Loss converges more efficiently on MNIST OCR: https://github.com/KindXiaoming/grow-crystals .. "Harmonic Loss Trains Interpretable AI Models" (2025) https://news.ycombinator.com/item?id=42941954
Making any integer with four 2s
> I've read about this story in Graham Farmelo's book The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius.
"The Strangest Man": https://en.wikipedia.org/wiki/The_Strangest_Man
Four Fours: https://en.wikipedia.org/wiki/Four_fours :
> Four fours is a mathematical puzzle, the goal of which is to find the simplest mathematical expression for every whole number from 0 to some maximum, using only common mathematical symbols and the digit four. No other digit is allowed. Most versions of the puzzle require that each expression have exactly four fours, but some variations require that each expression have some minimum number of fours.
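A brute-force search over + - * / (with .4 and the concatenation 44 as extra atoms) finds expressions for the small integers; a minimal sketch, not an exhaustive solver for the full puzzle (which also allows roots, factorials, etc.):

```python
from fractions import Fraction

def four_fours(limit=9):
    """Find one expression per integer 0..limit using exactly four
    fours and only + - * /, with the atoms 4, .4, and 44."""
    # best[n]: value reachable with exactly n fours -> one expression
    best = {1: {Fraction(4): "4", Fraction(2, 5): ".4"},
            2: {Fraction(44): "44"}, 3: {}, 4: {}}
    for n in range(2, 5):
        for a in range(1, n):
            for va, ea in list(best[a].items()):
                for vb, eb in list(best[n - a].items()):
                    ops = [(va + vb, f"({ea}+{eb})"),
                           (va - vb, f"({ea}-{eb})"),
                           (va * vb, f"({ea}*{eb})")]
                    if vb:  # avoid division by zero
                        ops.append((va / vb, f"({ea}/{eb})"))
                    for v, e in ops:
                        best[n].setdefault(v, e)
    return {k: best[4][k] for k in range(limit + 1) if k in best[4]}

for k, expr in four_fours().items():
    print(k, "=", expr)
```

Using Fraction keeps arithmetic exact, so 4/4 + 4/4 is recognized as exactly 2 rather than a float near it.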
"Golden ratio base is a non-integer positional numeral system" (2023) https://news.ycombinator.com/item?id=37969716 :
> What about radix e^(pi*i), or just e?
The foundations of America's prosperity are being dismantled
> They warn that dismantling the behind-the-scenes scientific research programs that backstop American life could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. The US took nearly a century to craft its rich scientific ecosystem; if the unraveling that has taken place over the past month continues, Americans will feel the effects for decades to come.
Flu deaths surpass Covid deaths nationwide for first time
"CDC terminates flu vaccine promotion campaign" (2025-02) https://news.ycombinator.com/item?id=43126704
General Reasoning: Free, open resource for building large reasoning models
"Can Large Language Models Emulate Judicial Decision-Making? [Paper]" (2025) https://news.ycombinator.com/item?id=42927611 ; awesome-legal-nlp, LexGLUE, FairLex, LegalBench, "Who hath done it?" exercise : {Thing done}, ({Gdo, You, Others, Unknown/Nobody} x {Ignorance, Malice, Motive, Intent}) ... Did nobody do this?
Can LLMs apply a consistent procedure for logic puzzles with logically disjunctive possibilities?
Enter: Philosoraptor the LLM
Show HN: Slime OS – An open-source app launcher for RP2040 based devices
Hey all - this is the software part of my cyberdeck, called the Slimedeck Zero.
The Slimedeck Zero is based around this somewhat esoteric device called the PicoVision which is a super cool RP2040 (Raspberry Pi Pico) based device. It outputs relatively high-res video over HDMI while still being super fast to boot with low power consumption.
The PicoVision actually uses two RP2040s: one as a CPU and one as a GPU. This gives the CPU plenty of cycles to run bigger apps (and a heavy Python stack) and lets the GPU handle some of the rendering and the complex timing HDMI requires. You can do this same thing on a single RP2040, but we get a lot of extra headroom with this double setup.
The other unique thing about the PicoVision is it has a physical double-buffer - two PSRAM chips which you manually swap between the CPU and GPU. This removes any possibility of screen tearing since you always know the buffer your CPU is writing to is not being used to generate the on-screen image.
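The swap discipline can be modeled in a few lines (a toy Python model of the idea, not the PicoVision firmware):

```python
class DoubleBuffer:
    """Toy model of a physical double buffer: the CPU only ever draws
    into the back buffer; swap() hands it to the GPU atomically, so a
    frame being scanned out is never written mid-display (no tearing)."""
    def __init__(self, size: int):
        self.front = bytearray(size)  # "GPU" reads this for scan-out
        self.back = bytearray(size)   # "CPU" draws into this

    def draw(self, offset: int, data: bytes):
        self.back[offset:offset + len(data)] = data

    def swap(self):
        self.front, self.back = self.back, self.front

fb = DoubleBuffer(4)
fb.draw(0, b"\x01\x02")
print(bytes(fb.front))  # still blank: nothing visible until the swap
fb.swap()
print(bytes(fb.front))  # now shows the drawn pixels
```

On the PicoVision the two buffers are separate PSRAM chips and the "swap" rewires which chip each RP2040 talks to; the invariant is the same.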
For my cyberdeck, I took a PicoVision, hacked a QWERTY keyboard from a smart TV remote, added an expansion port, and hooked it all up to a big 5" 800x480 screen (interlaced up from 400x240 internal resolution).
I did a whole Slimedeck Zero build video ( https://www.youtube.com/watch?v=rnwPmoWMGqk ) over on my channel but I really hope Slime OS can have a life of its own and fit onto multiple form factors with an ecosystem of apps.
I've tried to make it easy and fun to write apps for. There's still a lot broken / missing / tbd but it's enough of a base that, personally, it already sparks that "programming is fun again" vibe so hopefully some other folks can enjoy it!
Right now it only runs on the PicoVision but there's no reason it couldn't run on RP2350s or other hardware - but for now I'm more interested in adding more input types (we're limited to the i2c TV remote keyboard I hacked together) and fleshing out the internal APIs so they're stable enough to make apps for it!
Multiple buffering: https://en.wikipedia.org/wiki/Multiple_buffering
Wikipedia has "page flipping" but not "physical double-buffer"? TIL about triple buffering, and quad buffering for stereoscopic applications.
Sodium-ion EV battery breakthrough pushes performance to theoretical limits
> “The binder we chose, carbon nanotubes, facilitates the mixing of TAQ [bis-tetraaminobenzoquinone] crystallites and carbon black particles, leading to a homogeneous electrode,” explained Chen.
"High-Energy, High-Power Sodium-Ion Batteries from a Layered Organic Cathode" (2025) https://pubs.acs.org/doi/10.1021/jacs.4c17713 :
> It exhibits a high theoretical capacity of 355 mAh/g per formula unit, enabled by a four-electron redox process, and achieves an electrode-level energy density of 606 Wh/kg (90 wt % active material) along with excellent cycling stability,
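As a sanity check, the quoted capacity and electrode-level energy density imply an average discharge voltage of roughly 1.9 V (inferred from the quoted numbers, not stated in the abstract):

```python
# Figures quoted from the paper's abstract:
capacity_active = 355    # mAh/g of TAQ active material
active_fraction = 0.90   # 90 wt% active material in the electrode
energy_electrode = 606   # Wh/kg at the electrode level

# mAh/g is numerically equal to Ah/kg, so Wh/kg divided by Ah/kg gives volts.
electrode_capacity = capacity_active * active_fraction  # Ah/kg of electrode
avg_voltage = energy_electrode / electrode_capacity
print(f"implied average voltage ~ {avg_voltage:.2f} V")
```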
3.5M Voters Were Purged During 2024 Presidential Election [video]
Aren't there ZK (Zero Knowledge proof) blockchains that allow people to privately check whether their vote was counted correctly?
How can homomorphic encryption help detect and prevent illegal vote suppression?
law and politics are soft problems. correctness and math are meaningless here.
...and the interview talks about specific and named individuals using KKK tactics to block people from reaching the polls, not a faceless "they" stealing votes in deep government. So I'm not even sure where your comment comes from.
Show HN: ArXiv-txt, LLM-friendly ArXiv papers
Just change arxiv.org to arxiv-txt.org in the URL to get the paper info in markdown
Example:
Original URL: https://arxiv.org/abs/1706.03762
Change to: https://arxiv-txt.org/abs/1706.03762
To fetch the raw text directly, use https://arxiv-txt.org/raw/abs/1706.03762; this will be particularly useful for APIs and agents.
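The URL rewrite is mechanical enough to script; a stdlib-only sketch (error handling is minimal, and the helper name is my own):

```python
from urllib.parse import urlparse, urlunparse

def to_arxiv_txt(url: str, raw: bool = False) -> str:
    """Rewrite an arxiv.org URL to its arxiv-txt.org counterpart.
    With raw=True, target the /raw/ endpoint for plain text."""
    p = urlparse(url)
    if p.netloc not in ("arxiv.org", "www.arxiv.org"):
        raise ValueError("not an arxiv.org URL")
    path = ("/raw" + p.path) if raw else p.path
    return urlunparse(p._replace(netloc="arxiv-txt.org", path=path))

print(to_arxiv_txt("https://arxiv.org/abs/1706.03762"))
# https://arxiv-txt.org/abs/1706.03762
print(to_arxiv_txt("https://arxiv.org/abs/1706.03762", raw=True))
# https://arxiv-txt.org/raw/abs/1706.03762
```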
If you train an LLM on only formally verified code, it should not be expected to generate formally verified code.
Similarly, if you train an LLM on only published ScholarlyArticles ['s abstracts], it should not be expected to generate publishable or true text.
Traceability for Retraction would be necessary to prevent lossy feedback.
The Raspberry Pi RP2040 Gets a Surprise Speed Boost, Unlocks an Official 200MHz
"Ensuring Accountability for All Agencies" – Executive Order
Show HN: Subtrace – Wireshark for Docker Containers
Hey HN, we built Subtrace (https://subtrace.dev) to let you see all incoming and outgoing requests in your backend server—like Wireshark, but for Docker containers. It comes with a Chrome DevTools-like interface. Check out this video: https://www.youtube.com/watch?v=OsGa6ZwVxdA, and see our docs for examples: https://docs.subtrace.dev.
Subtrace lets you see every request with full payload, headers, status code, and latency details. Tools like Sentry and OpenTelemetry often leave out these crucial details, making prod debugging slow and annoying. Most of the time, all I want to see are the headers and JSON payload of real backend requests, but it's impossible to do that in today's tools without excessive logging, which just makes everything slower and more annoying.
Subtrace shows you every backend request flowing through your system. You can use simple filters to search for the requests you care about and inspect their details.
Internally, Subtrace intercepts all network-related Linux syscalls using Seccomp BPF so that it can act as a proxy for all incoming and outgoing TCP connections. It then parses HTTP requests out of the proxied TCP stream and sends them to the browser over WebSocket. The Chrome DevTools Network tab is already ubiquitous for viewing HTTP requests in the frontend, so we repurposed it to work in the browser like any other app (we were surprised that it's just a bunch of TypeScript).
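A toy sketch of the "parse HTTP out of the proxied TCP stream" step (names are hypothetical; the real parser must handle chunked bodies, pipelining, TLS termination, etc.):

```python
def parse_http_request(stream: bytes):
    """Split one HTTP/1.x request from a raw byte stream into
    (method, path, headers, body). Assumes the full request is present."""
    head, _, body = stream.partition(b"\r\n\r\n")
    request_line, *header_lines = head.decode("latin-1").split("\r\n")
    method, path, _version = request_line.split(" ", 2)
    headers = dict(line.split(": ", 1) for line in header_lines)
    return method, path, headers, body

raw = (b"POST /api/v1/items HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 2\r\n"
       b"\r\n"
       b"hi")
method, path, headers, body = parse_http_request(raw)
print(method, path, headers["Host"], body)
```

Everything after this step is presentation: ship the parsed records over WebSocket and let the DevTools Network tab render them.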
Setup is just one command for any Linux program written in any language.
You can use Subtrace by adding a `subtrace run` prefix to your backend server startup command. No signup required. Try for yourself: https://docs.subtrace.dev
Stratoshark, the Docker container part of Wireshark, may be a better match for that description.
I'd probably use a Postman-related pitch instead. This is much closer to that and looks like a nice complement to that workflow.
Stratoshark: https://wiki.wireshark.org/Stratoshark :
> Stratoshark captures and analyzes system calls and logs using libsinsp and libscap, and can share capture files with the Sysdig command line tool and Falco
Show HN: Scripton – Python IDE with built-in realtime visualizations
Hey HN, Scripton (https://scripton.dev) is a Python IDE built for fast, interactive visualizations and exploratory programming — without the constraints of notebooks.
Why another Python IDE? Scripton hopes to fill a gap in the Python development ecosystem by being an IDE that:
1. Focuses on easy, fast, and interactive visualizations (and exposes rich JS plotting libraries like Observable Plot and Plotly directly to Python)
2. Provides a tightly integrated REPL for rapid prototyping and exploration
3. Is script-centric (as opposed to, say, notebook-style)
A historical detour for why these 3 features: Not so long ago (ok, well, maybe over a decade ago...), the go-to environment for many researchers in scientific fields would have been something like MATLAB. Generating multiple simultaneous visualizations (potentially dynamic) directly from your scripts, rapidly prototyping in the REPL, all without giving up on writing regular scripts. Over time, many switched over to Python but there wasn't an equivalent environment offering similar capabilities. IPython/Jupyter notebooks eventually became the de facto replacement. And while notebooks are great for many things (indeed, it wasn't uncommon for folks to switch between MATLAB and Mathematica Notebooks), they do make certain trade-offs that prevent them from being a full substitute.
Inner workings:
- Implemented in C++ (IDE <-> Python IPC), Python, TypeScript (UI), WGSL (WebGPU-based visualizations)
- While the editor component is based off Monaco, the IDE is not a vscode fork and was written from scratch. Happy to chat about the trade-offs if anyone's interested
- Uses a custom Python debugger written from scratch (which enables features like visualizing intermediate outputs while paused in the debugger)
Scripton's under active development (currently only available for macOS but Linux and Windows support is planned). Would love for you to try it out and share your thoughts! Since this is HN, I’m also happy to chat about its internals.
I am a robotics engineer/scientist and I do a shit ton of visualization of all kinds of high-fidelity/high-rate data, often in a streaming setting: time series at a few thousand Hz, RGB/depth images from multiple cameras, debugging my models by visualizing many layer outputs, every augmentation, etc.
For a long time, I had my own observability suite - a messy library of python scripts that I use for visualizing data. I replaced all of them with rerun (https://rerun.io/) and if you are someone who thinks Scripton is exciting, you should def try rerun too!
I use cursor/vscode for my development and add a line or two to my usual workflows in python, and rerun pops up in its own window. It's a simple pip-installable library, and just works. It's open source, and the founders run a very active forum too.
Edit: One slightly related tidbit that might be interesting to HN folks. rerun isn't that old and is in active development, with some breaking changes and new features that come up every month. That means LLMs are pretty bad at rerun code gen beyond the simple boilerplate. Recently it kind of made my life hell, as all of my interns refuse to use docs and try using LLMs for rerun code generation, then come to me with messy code spaghetti. It's both sad and hilarious. To make my life easier, I asked the rerun folks to create and host machine-readable docs somewhere, and they never got to it. So I just scrape their docs into a markdown file and ask my interns to paste the docs into their prompt before they query LLMs, and it works like a charm now.
For a magnet levitation project I am dumping data to a CSV on a rpi and then reading it over ssh into matplotlib on my desktop. It works but it's choppy, probably because of the ssh.
Could I drop rerun into this to improve my monitoring?
From "Show HN: We open-sourced our [rpi CSV] compost monitoring tech" https://news.ycombinator.com/item?id=42201207 :
Nagios, Collectd, [Prometheus], Grafana
From "Preview of Explore Logs, a new way to browse your logs without writing LogQL" https://news.ycombinator.com/item?id=39981805 :
> Grafana supports SQL, PromQL, InfluxQL, and LogQL.
From https://news.ycombinator.com/item?id=40164993 :
> But that's not a GUI, that's notebooks. For Jupyter integration, TIL pyqtgraph has jupyter_rfb, Remote Frame Buffer: https://github.com/vispy/jupyter_rfb
pyqtgraph: https://github.com/pyqtgraph/pyqtgraph
Matplotlib can generate GIF and WEBM (and .mp4) animations, but not realtime.
ManimCE might work in notebooks, but IDK about realtime
Genesis is fast enough by FPS for faster than realtime 3D with Python (LuisaRender) and a GPU: https://github.com/Genesis-Embodied-AI/Genesis
SWE-Lancer: a benchmark of freelance software engineering tasks from Upwork
> By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
What could be costed in an Upwork or a Mechanical Turk task value?
Task Centrality or Blockingness estimation: precedence edges, tsort topological sort, graph metrics like centrality
Task Complexity estimation: story points, planning poker, relative local complexity scales
Task Value estimation: cost/benefit analysis, marginal revenue
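The centrality/blockingness idea above can be sketched with the stdlib's graphlib on a hypothetical task graph (task names and edges are invented for the example):

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task maps to its set of prerequisites.
deps = {
    "deploy": {"test"},
    "test": {"build"},
    "docs": {"build"},
    "build": set(),
}

# A valid execution order (tsort): prerequisites always come first.
order = list(TopologicalSorter(deps).static_order())
print("schedule:", order)

def dependents(task):
    """Transitive dependents of a task -- a crude 'blockingness' score:
    the more tasks something blocks, the more central it is."""
    blocked, frontier = set(), {task}
    while frontier:
        frontier = {t for t, pre in deps.items()
                    if pre & frontier and t not in blocked}
        blocked |= frontier
    return blocked

print("'build' blocks:", sorted(dependents("build")))
```

A richer version would weight each blocked task by its estimated value, turning blockingness into an expected-cost-of-delay estimate.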
Nuclear fusion: WEST beats the world record for plasma duration
> 1337 seconds
AFAIU, no existing tokamaks can handle sustained plasma for any significant period of time because they'll burn down.
Did this destroy the facility?
What duration of sustained fusion plasma can tokamaks like EAST, WEST, and ITER withstand? What will need to change for continuous fusion energy to be net gained from a tokamak or a stellarator fusion reactor?
If this had destroyed the facility, that would be the headline of this news article. WEST's record is 22 minutes (it's in the title), and you could google EAST and ITER, but the title tells you it is less than 22 minutes. WEST is a testing ground for ITER. The fact that you can have sustained fusion for only 22 minutes is the biggest problem, since you need to boil water continuously, because all power sources rely on taking cold water and making it warm constantly so that it makes a turbine move.
There is "destroyed" and then there is a smoking hole in the side of the planet :) But I think it's fair to say that after running for 22 minutes, there's no way it can just be turned back on later; fairly sure it was a "phew, look at that, almost lost plasma containment" moment. Keep in mind that they are trying to replicate the conditions found inside a star with some magnets and stuff. Sure, it's ferociously engineered stuff, but not at all like the stuff that could exist inside a star, so all in all a rather audacious endeavour, and I wish them luck with it.
The system is not breakeven, and the plasma was contained for 22 minutes, so the situation would be that the plasma was contained until it ran out of fuel. It is made out of tungsten for heat dissipation, has active cooling, and has magnetic confinement with superconductors to prevent the system from destroying itself. https://en.wikipedia.org/wiki/WEST_(formerly_Tore_Supra)
Fusion energy gain factor: https://en.wikipedia.org/wiki/Fusion_energy_gain_factor :
> A fusion energy gain factor, usually expressed with the symbol Q, is the ratio of fusion power produced in a nuclear fusion reactor to the power required to maintain the plasma in steady state
To rephrase the question: what is the limit to the duration of sustained magnetically confined fusion plasma in the EAST, WEST, and ITER tokamaks, and why is the limit that amount of time?
Don't those materials melt if exposed to temperatures hotter than the sun for sufficient or excessive periods of time?
For what sustained plasma duration will EAST, WEST, and ITER need to be redesigned? 1 hour, 24 hours?
EAST, WEST, URNER
[LLNL] "US scientists achieve net energy gain for second time in nuclear fusion reaction" (2023-08) https://www.theguardian.com/environment/2023/aug/06/us-scien...
But IDK if they've seen the recent thing about water 100X'ing laser proton plasma beams from SLAC, published this year/month;
From https://news.ycombinator.com/item?id=43088886 :
> "Innovative target design leads to surprising discovery in laser-plasma acceleration" (2025-02) https://phys.org/news/2025-02-discovery-laser-plasma.html
>> Compared to similar experiments with solid targets, the water sheet reduced the proton beam's divergence by an order of magnitude and increased the beam's efficiency by a factor of 100
"Stable laser-acceleration of high-flux proton beams with plasma collimation" (2025) https://www.nature.com/articles/s41467-025-56248-4
Timeline of nuclear fusion: https://en.wikipedia.org/wiki/Timeline_of_nuclear_fusion
That energy gain was only in the plasma, not in the entire system.
The extremely low efficiency of the lasers used there for converting electrical energy into light energy (perhaps of the order of 1%) has not been considered in the computation of that "energy gain".
Many other hidden energy sinks have also not been considered, like the energy required to produce deuterium and tritium, or the efficiencies of capturing the thermal energy released by the reaction and of converting it into electrical energy.
It is likely that the energy gain in the plasma must be at least in the range 100 to 1000, in order to achieve an overall energy gain greater than 1.
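Putting illustrative numbers on that argument: with laser wall-plug efficiency of the order of 1% and typical thermal-to-electric conversion around 40%, the plasma gain needed just for engineering breakeven comes out in exactly that range (all figures here are assumptions for the arithmetic, not measured values):

```python
# Back-of-the-envelope engineering gain for a laser-driven fusion plant.
eta_laser = 0.01    # wall-plug electricity -> laser light, "order of 1%"
eta_thermal = 0.40  # captured fusion heat -> electricity (steam cycle)
q_plasma = 1.5      # illustrative target gain: fusion out / laser in

# Electricity out per unit of electricity in:
q_engineering = q_plasma * eta_laser * eta_thermal
print(f"engineering gain: {q_engineering:.4f}")  # far below breakeven

# Plasma gain needed just to break even at these efficiencies:
q_needed = 1 / (eta_laser * eta_thermal)
print(f"plasma gain needed for breakeven: {q_needed:.0f}")
```

This also ignores the other hidden sinks mentioned above (fuel production, cryogenics), which push the required plasma gain even higher.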
> all power sources rely on taking cold water and making it warm constantly so that it makes a turbine move.
PV (photovoltaic), TPV (thermophotovoltaic), and thin-film and other solid-state thermoelectric (TE) approaches do not rely upon corrosive water turning a turbine.
Turbine blades can be made of materials that are more resistant to corrosion.
On turbine efficiency:
"How the gas turbine conquered the electric power industry" https://news.ycombinator.com/context?id=38314774
It looks like the GE 7HA gas/hydrogen turbine is still the most efficient turbine? https://gasturbineworld.com/ge-7ha-03-gas-turbine/ :
> Higher efficiency: 43.3% in simple cycle and up to 64% in combined cycle,
Steam turbines aren't as efficient as gas turbines FWIU.
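The quoted 43.3% simple-cycle and 64% combined-cycle figures are consistent with that: under an idealized cascade model, they imply a bottoming steam cycle around 36-37%, below the gas turbine's own efficiency (the cascade formula is a simplification of real plant accounting):

```python
# Idealized combined-cycle cascade: the steam cycle runs on the heat
# the gas turbine rejects, so
#   eta_combined = eta_gas + (1 - eta_gas) * eta_steam
# Solving for the implied steam (bottoming) cycle efficiency:
eta_gas = 0.433
eta_combined = 0.64
eta_steam = (eta_combined - eta_gas) / (1 - eta_gas)
print(f"implied steam-cycle efficiency ~ {eta_steam:.1%}")
```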
/? which nuclear reactors do not have a steam turbine:
"How can nuclear reactors work without steam?" [in space] https://www.reddit.com/r/askscience/comments/7ojhr8/how_can_... :
> 5% efficient; you usually get less than 5% of the thermal energy converted into electricity
(International space law prohibits putting nuclear reactors in space without specific international approval, which is considered for e.g. deep space probes like Voyager; though the sun is exempt.)
Rankine cycle (steam) https://en.wikipedia.org/wiki/Rankine_cycle
Thermoelectric effect: https://en.wikipedia.org/wiki/Thermoelectric_effect :
> The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect (temperature differences cause electromotive forces), the Peltier effect (thermocouples create temperature differences), and the Thomson effect (the Seebeck coefficient varies with temperature).
"Thermophotovoltaic efficiency of 40%" https://www.nature.com/articles/s41586-022-04473-y
Multi-junction PV cells are not limited by the Shockley–Queisser limit, but are limited by current production methods.
Multi-junction solar cells: https://en.wikipedia.org/wiki/Multi-junction_solar_cell#Mult...
Which existing thermoelectric or thermophotovoltaic approaches work with nuclear fusion levels of heat (infrared)?
Okay so I meant to say the simplest way is to heat water in this situation. But there are alternatives here https://en.wikipedia.org/wiki/Fusion_power?wprov=sfti1#Tripl...
I wouldn't have looked this up otherwise.
Maybe solar energy storage makes sense for storing the energy from fusion reactor stars, too.
There's also MOST: Molecular Solar Thermal Energy Storage, which stores solar energy as chemical energy for up to 18 years with a "specially designed molecule of carbon, hydrogen and nitrogen that changes shape when it comes into contact with sunlight."
"Chip-scale solar thermal electrical power generation" (2022) https://doi.org/10.1016/j.xcrp.2022.100789
> Multi-junction PV cells are not limited by the Shockley–Queisser limit, but are limited by current production methods.
Such as multilayer nanolithography, which nanoimprint lithography 10Xs; https://arstechnica.com/reviews/2024/01/canon-plans-to-disru...
Perhaps multilayer junction PV and TPV cells could be cost-effectively manufactured with nanoimprint lithography.
Qualys Security Advisory: MitM and DoS attacks against OpenSSH client and server
MitM-able since 6.8 (December 2014) only if
> VerifyHostKeyDNS is "yes" or "ask" (it is "no" by default),
And DoS-able since 9.5 (2023) because of a new ping command.
> To confirm our suspicion, we adopted a dual strategy:
> - we manually audited all of OpenSSH's functions that use "goto", for missing resets of their return value;
> - we wrote a CodeQL query that automatically searches for functions that "goto out" without resetting their return value in the corresponding "if" code block.
Catalytic computing taps the full power of a full hard drive
So the trick is to do the computation forwards, but take care to only use reversible operations, store the result outside of the auxiliary "full" memory and then run the computation backwards, reversing all instructions and thus undoing their effect on the auxiliary space.
Which is called catalytic, because it wouldn't be able to do the computation in the amount of clean space it has, but can do it by temporarily mutating auxiliary space and then restoring it.
What I haven't yet figured out is how to do reversible instructions on auxiliary space. You can mutate a value depending on your input, but how do you use that value, since you can't assume anything about the contents of the auxiliary space and just overwriting with a constant (e.g. 0) is not reversible.
Maybe there is some xor like trick, where you can store two values in the same space and you can restore them, as long as you know one of the values.
Edit: After delving into the paper linked in another comment, which is rather mathy (or computer-sciency in the original meaning of the phrase), I'd like to have a simple example of a program that cannot run in its amount of free space and actually needs to utilize the auxiliary space.
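A minimal Python sketch of the reverse-and-restore mechanic, assuming only the XOR trick discussed above (this is not the full catalytic construction, which also lets the unknown aux contents participate in the computation): every mutation of the borrowed cell is a self-inverse XOR, the answer is copied into clean space, and the forward pass is then replayed in reverse so the auxiliary memory ends exactly as it began.

```python
def catalytic_parity(bits, aux):
    """Toy sketch: compute the parity of `bits` while borrowing aux[0],
    whose initial contents are unknown and must be restored exactly.
    """
    clean = 0                 # the small amount of clean workspace
    for b in bits:
        aux[0] ^= b           # forward pass: reversible (self-inverse) mutation
        clean ^= b            # mirror the fold into clean space
    result = clean            # the answer lives in clean memory
    for b in reversed(bits):
        aux[0] ^= b           # backward pass: uncompute, restoring aux[0]
    return result

aux = [0b1011]                # arbitrary "full" auxiliary memory
snapshot = list(aux)
parity = catalytic_parity([1, 0, 1, 1], aux)
```

After the call, `aux == snapshot` holds: the borrowed memory is returned untouched, which is the "catalytic" property.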
That sounds similar to this in QC:
From "Reversible computing escapes the lab" (2025) https://news.ycombinator.com/item?id=42660606#42705562 :
> FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
> "The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123
Though also Landauer's limit presumably only applies to electrons; not photons or phonons or gravitational waves.
Can you feed gravitational waves forward to a receptor that carries only that gravitational-wave bit, so that it can later be recalled?
Setting up a trusted, self-signed SSL/TLS certificate authority in Linux
There is just one thing missing from this. Name Constraints.
This doesn't get brought up enough but a Name Constraint on a root cert lets you limit where the root cert can be signed to. So instead of this cert being able to impersonate any website on the internet, you ratchet it down to just the domain (or single website) that you want to sign for.
https://github.com/caddyserver/caddy/issues/5759 :
> When generating a CA cert via caddy and putting that in the trust store, those private keys can also forge certificates for any other domain.
RFC5280 (2008) "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile" > Section 4.2.1.10 Name Constraints: https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... :
> The name constraints extension, which MUST be used only in a CA certificate, indicates a name space within which all subject names in subsequent certificates in a certification path MUST be located. Restrictions apply to the subject distinguished name and apply to subject alternative names. Restrictions apply only when the specified name form is present. If no name of the type is in the certificate, the certificate is acceptable.
> Name constraints are not applied to self-issued certificates (unless the certificate is the final certificate in the path). (This could prevent CAs that use name constraints from employing self-issued certificates to implement key rollover.)
If this is now finally supported that's great. The issue was that for it to be useful it has to be marked critical / fail-closed, because a CA with ignored name constraint == an unrestricted CA. But if you make it critical, then clients who don't understand it will just fail. You can see how this doesn't help adoption.
It says "Proposed Standard" on the RFC; maybe that's why it's not widely implemented if that's the case?
https://bettertls.com/ has Name Constraints implementation validation tests, but "Archived Results" doesn't seem to have recent versions of SSL clients listed?
nameConstraints=critical,
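In OpenSSL extension-config syntax (the `nameConstraints=critical,` fragment above), a name-constrained root CA section might look like the following sketch; the domain is a placeholder. Marking the extension critical is what makes it fail closed in clients that don't implement name constraints.

```ini
[ v3_name_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.internal.example.com
```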
DNS Certification Authority Authorization: https://en.wikipedia.org/wiki/DNS_Certification_Authority_Au... :> Registrants publish a "CAA" Domain Name System (DNS) resource record which compliant certificate authorities check for before issuing digital certificates.
And hopefully they require DNSSEC signatures and DoH/DoT/DoQ when querying for CAA records.
Name Constraints has been around at least since 1999 (RFC 2459).
I'm not sure why CAA is brought up here. I guess it is somewhat complementary in "reducing" the power of CAs, but it defends against good CAs misissuing stuff, not limiting the power of arbitrary CAs (as it's checked at issuance time, not at time of use).
CAA does not require DNSSEC or DOH.
New technique generates topological structures with gravity water waves
Water also focuses laser plasmas;
"Innovative target design leads to surprising discovery in laser-plasma acceleration" https://phys.org/news/2025-02-discovery-laser-plasma.html
> Compared to similar experiments with solid targets, the water sheet reduced the proton beam's divergence by an order of magnitude and increased the beam's efficiency by a factor of 100.
LightGBM Predict on Pandas DataFrame – Column Order Matters
[LightGBM] does not converge regardless of feature order.
From https://news.ycombinator.com/item?id=41873650 :
> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless?
Also, current LLMs suggest that statistical independence is entirely distinct from orthogonality, which we typically assume with high-dimensional problems. And, many statistical models do not work with non-independent features.
Does this model work with non-independence or nonlinearity?
Does the order of the columns in the training data CSV change the alpha of the model; does model output converge regardless of variance in the order of training data?
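A defensive pattern, sketched here with pandas only (the column names are hypothetical; with a fitted LightGBM Booster the list would come from `booster.feature_name()`): realign incoming columns by name before calling predict, instead of trusting positional order.

```python
import pandas as pd

# Column order the (hypothetical) model saw at fit time:
train_cols = ["age", "income", "score"]

# Incoming frame with the same columns in a different order:
df = pd.DataFrame({"income": [50], "score": [0.9], "age": [40]})

# Realign by NAME, not position, before predicting:
aligned = df[train_cols]
```

If any training column is missing from `df`, the selection raises a `KeyError` instead of silently feeding misaligned values to the model.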
670nm red light exposure improved aged mitochondrial function, colour vision
They didn't have any controls with a different wavelength of light. They might as well be measuring a diurnal pattern.
There seem to be very similar studies with the same conclusion.
https://www.medicalnewstoday.com/articles/3-minutes-of-deep-...
Another:
"Red (660 nm) or near-infrared (810 nm) photobiomodulation stimulates, while blue (415 nm), green (540 nm) light inhibits proliferation in human adipose-derived stem cells" (2017) https://www.nature.com/articles/s41598-017-07525-w
TIL it's called "red light optogenetics" even when the cells aren't modified for opto-activation.
And,
"Green light induces antinociception via visual-somatosensory circuits" (2023) https://www.sciencedirect.com/science/article/pii/S221112472...
"Infrared neural stimulation in human cerebral cortex" (2023) https://www.sciencedirect.com/science/article/pii/S1935861X2... :
> In a comparison of electrical, optogenetic, and infrared neural stimulation, Roe et al. [14] found that all three approaches could in principal achieve such specificity. However, because brain tissue is conductive, it can be challenging to confine neuronal activation by traditional electrical stimulation to a single cortical column. In contrast, optogenetic stimulation can be applied with high spatial specificity (e.g. by delivering light via a 200um fiber optic apposed to the cortex) and with cell-type specificity (e.g. excitatory or inhibitory cells); however, optogenetics requires viral vectors and gene transduction procedures, making it less easy for human applications [15]. Over the last decade, infrared neural stimulation (INS), which is a pulsed heat-mediated approach, has provided an alternative method of neural activation. Because brain tissue is 75% water, infrared light delivered near peak absorption wavelengths (e.g. 1875 nm [16]) permits effective delivery of heat to the brain tissue. In particular, experimental and modelling studies [[17], [18], [19]] have shown that 1875 nm light (brief 0.5sec trains of 0.25msec pulses forming a bolus of heat) effectively achieves focal (submillimeter to millimeter sized) activation of neural tissue
Does NIRS -based (neuro-) imaging induce neuronal growth?
Are there better photonic beam-forming apparatuses than TI DLP projectors with millions of tiny actuated mirrors; isn't that what a TV does?
Cold plasma also affects neuronal regrowth and could or should be used for wound closure. Are they cold plasma-ing Sam (Hedlund) in "Tron: Legacy" (2010) before the disc golf discus thing?
Does cold plasma reduce epithelial scarring?
A certain duration of cold plasma also appears to increase seed germination rate FWIW.
To find these studies, I used a search engine and search terms and then picked studies which seem to confirm our bias.
Why is there an increase in lung cancer among women who have never smoked?
https://www.sciencedaily.com/releases/2022/04/220411113733.h... :
> Cigarette smoking is overwhelmingly the main cause of lung cancer, yet only a minority of smokers develop the disease.
https://www.verywellhealth.com/what-percentage-of-smokers-ge...
Does hairspray increase the probability of contracting lung cancer?
Does cheap makeup drain into maxillaries and cause breast cancer?
Do zip codes lived in, joined with AQI, predict lung cancer? Which socioeconomic, dietary, and environmental contaminant exposures are described by "zip codes lived in"?
There are unbleached (hemp) cigarette rolling papers that aren't soaked in chlorine bleach.
Strangely, cigarette filters themselves carry a CA Prop __ warning.
There are hemp-based cigarette filters, but there are apparently not full-size 1-1/4s filters which presumably wouldn't need to carry a Prop __ warning.
"The filter's the best part" - Dennis Leary
Tobacco contains an MAOI. Many drug supplements list MAOIs as a contraindication. Cheese tastes differently (worse) after tobacco in part due to the MAOIs?
All combustion produces benzene: ICE vehicles, campfires, industrial emissions, hydrocarbon power generation, and tobacco smoking.
Ronald Fisher - who created the F statistic - never accepted that tobacco was the primary cause of lung cancer.
There appear to be certain genes that significantly increase susceptibility to tobacco-caused lung cancer.
"Hypothesizing that marijuana smokers are at a significantly lower risk of carcinogenicity relative to tobacco-non-marijuana smokers: evidenced based on statistical reevaluation of current literature" https://www.tandfonline.com/doi/abs/10.1080/02791072.2008.10... https://pubmed.ncbi.nlm.nih.gov/19004418/
Cannabis is an extremely absorbent plant; hemp is recommended for soil remediation.
From https://www.reddit.com/r/askscience/comments/132nzng/why_doe... :
> C. Everett Koop (the former US Surgeon General) went on record to state that approximately 90% of lung cancer cases associated with smoking were directly related to the presence of Polonium-210, which emits alpha radiation straight to your lung tissue when inhaled.
Tobacco and Cannabis both cause emphysema.
Also gendered: indoor task preference and cleaning product exposure?
Nonstick cookware exposure
Reduced fat, increased fructose foods (sugar feeds cancer)
Exhaust and walking?
Cooking oil and air fryer usage rates have changed.
Water quality isn't gender-specific is it?
Rocket stoves change lives in other economies; cooking stove materials
There are salt-based cleaning products, USB dilute hypochlorite bleach generators that require just water and salt, there are citric acid based cleaning products, and there's vinegar and baking soda.
Electronic motorized plastic bristle brushes?
Ironically, bleach doesn't kill COVID at all, but UV-C does.
Did somebody train an LLM with a mixture of RFK Jr and schizophrenic thought disorder?
Ask HN: What are people's experiences with knowledge graphs?
I see lots of YouTube videos and content about knowledge graphs in the context of Gen AI. Are these at all useful for personal information retrieval and organization? If so, are there any frameworks or products that you'd recommend that help construct and use knowledge graphs?
Property graphs don't specify schema.
Is it Shape.color or Shape.couleur, feet or meters?
RDF has URIs for predicates (attributes). RDFS specifies :Class(es) with :Property's, which are identified by URIs.
E.g. Wikidata has schema; forms with validation. Dbpedia is Wikipedia infoboxes regularly extracted to RDF.
Google acquired metaweb freebase years ago, launched a Knowledge Graph product, and these days supports Structured Data search cards in microdata, RDFa, and JSONLD.
[LLM] NN topology is sort of a schema.
Linked Data standards for data validation include RDFS and SHACL. JSON schema is far more widely implemented.
RDFa is "RDF in HTML attributes".
How much more schema does the application need beyond [WikiWord] auto-linkified edges? What about typed edges with attributes other than href and anchor text?
AtomSpace is an in-memory hypergraph with schema to support graph rewriting specifically for reasoning and inference.
There are ORMs for graph databases. Just like with SQL, how much of the query and report can be done by the server without processing every SELECTed row?
Query languages for graphs: SQL, SPARQL, SPARQLstar, GraphQL, Cypher, Gremlin.
Object-attribute level permissions are for the application to implement and enforce. Per-cell keys and visibility are native db features of e.g. Accumulo, but to implement the same with e.g. Postgres every application that is a database client is on scout's honor to also enforce object-attribute access control lists.
And then identity; which user with which (sovereign or granted) cryptographic key can add dated named graphs that mutate which data in the database.
So, property graphs eventually need schema and data validation.
markmap.js.org is a simple app to visualize a markdown document with headings and/or list items as a mindmap; but unlike Freemind, there's no way to add edges that make the tree a cyclic graph.
Cyclic graphs require different traversal algorithms. For example, a recursive Python traversal will raise RecursionError when it encounters a graph cycle without a visited-node list, and a stack-based traversal of a cyclic graph will not halt without e.g. a visited-node set to detect cycles; though a valid graph path may contain cycles (and there is feedback in so many general systems).
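A stack-based sketch in Python: the visited set is what makes traversal of a cycle halt.

```python
def reachable(graph, start):
    """Iterative DFS over a possibly-cyclic adjacency dict; the `seen`
    set guarantees termination even when edges loop back."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:          # cycle (or diamond) detected: skip
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return seen

# A three-node cycle: a -> b -> c -> a
cyclic = {"a": ["b"], "b": ["c"], "c": ["a"]}
```

Without the `seen` check, the stack would grow forever on `cyclic`; with it, `reachable(cyclic, "a")` terminates with all three nodes.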
YAML-LD is JSON-LD in YAML.
JSON-LD as a templated output is easier than writing a (relatively slow) native RDF application and re-solving for what SQL ORM web frameworks already do.
There are specs for cryptographically signing RDF such that the signature matches regardless of the graph representation.
There are processes and business processes around knowledge graphs like there are for any other dataset.
OTOH; ETL, Data Validation, Publishing and Hosting of dataset and/or servicing arbitrary queries and/or cost-estimable parametric [windowed] reports, Recall and retraction traceability
DVC.org and the UC BIDS Computational Inference notebook book probably have a better enumeration of processes for data quality in data science.
...
With RDF - though it's a question of database approach and not data representation -
Should an application create a named graph per database transaction changeset or should all of that data provenance metadata be relegated to a database journal that can't be read from or written to by the app?
How much transaction authentication metadata should an app be trusted to write?
A typical SQL webapp has one database user which can read or write to any column of any table.
Blockchains and e.g. Accumulo require each user to "connect to" the database with a unique key.
It is far harder for users to impersonate other users in database systems that require a cryptographic key per user than it is to just write in a different username and date using the one db cred granted to all application instances.
W3C DIDs are cryptographic keys (as RDF with schema) that can be generated by users locally or generated centrally; similar to e.g. Bitcoin account address double hashes.
Users can cryptographically sign JSON-LD, YAML-LD, RDFa, and any other RDF format with W3C DIDs; in order to assure data integrity.
How do data integrity and data provenance affect the costs, utility, and risks of knowledge graphs?
Compared to GPG signing git commits to markdown+YAML-LD flat files in a git repo, and paying e.g gh to enforce codeowner permissions on files and directories in the repo by preventing unsigned and unauthorized commits, what are the risks of trusting all of the data from all of the users that could ever write to a knowledge graph?
Which initial graph schemas support inference and reasoning; graph rewriting?
CodeWeavers Hiring More Developers to Work on Wine and Valve's Proton
KSP2 support in Proton or ProtonGE could make the game playable.
From https://www.protondb.com/app/954850 :
> With no tinkering the in-game videos stutter and skip frames. I fixed this by installing lavfilters with protontricks. The game as far as i know has no means of limiting the frame rate. This was fixed using mangohud.
Are people still playing KSP2? I was one of the few people who were optimistic about it, but then they fired the devs, and the game hasn't been touched in months (despite still being sold for full price on Steam).
that would be impressive since I wouldn't consider the game playable on any OS
Opposing arrows of time can theoretically emerge from certain quantum systems
The Minkowski metric is
Δs = sqrt(-Δt^2 + Δx^2 + Δy^2 + Δz^2)
One aspect of this is that, if you sub `t -> -t'`, that's just as good a solution too. Which would suggest any solution with a positive time direction can have a negative time direction, just as easily. Is this widely assumed to be true, or at least physically meaningful?

There's also Wick rotations, where you can sub `t -> it'`, and then Minkowskian spacetime becomes Euclidean but time becomes complex-valued. Groovy stuff.
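The sign flip can be checked directly from the interval above: Δt enters only quadratically, so substituting t -> -t leaves the interval unchanged:

```latex
\Delta s^2 = -\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
\quad\xrightarrow{\;t \to -t\;}\quad
-(-\Delta t)^2 + \Delta x^2 + \Delta y^2 + \Delta z^2 = \Delta s^2
```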
I'm not much of a physics buff but I loved reading Julian Barbour's The Janus Point for a great treatment of the possibility of negative time.
The craziest thing I've seen though is the suggestion that an accelerating charge, emitting radiation that interacts with the charge itself and imparts a backreacting force on the charge, supposedly has solutions whose interpretation would suggest that it would be sending signals back in time. [0]
/? "time-polarized photons" https://www.google.com/search?q=%22time-polarized+photons%22
https://www.scribd.com/doc/287808282/Bearden-Articles-Mind-C... ... https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22... ... "psychoenergetics" ... /? Torsion fields:
- "Torsion fields generated by the quantum effects of macro-bodies" (2022) https://arxiv.org/abs/2210.16245 :
> We generalize Einstein's General Relativity (GR) by assuming that all matter (including macro-objects) has quantum effects. An appropriate theory to fulfill this task is Gauge Theory Gravity (GTG) developed by the Cambridge group. GTG is a "spin-torsion" theory, according to which, gravitational effects are described by a pair of gauge fields defined over a flat Minkowski background spacetime. The matter content is completely described by the Dirac spinor field, and the quantum effects of matter are identified as the spin tensor derived from the spinor field. The existence of the spin of matter results in the torsion field defined over spacetime. Torsion field plays the role of Bohmian quantum potential which turns out to be a kind of repulsive force as opposed to the gravitational potential which is attractive [...] Consequently, by virtue of the cosmological principle, we are led to a static universe model in which the Hubble redshifts arise from the torsion fields.
Wikipedia says that torsion fields are pseudoscientific.
Retrocausality is observed.
From "Evidence of 'Negative Time' Found in Quantum Physics Experiment" https://news.ycombinator.com/item?id=41707116 :
> "Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://arxiv.org/abs/2409.03680
/? hnlog retrocausality (Ctrl-F "retrocausal", "causal"): https://westurner.github.io/hnlog/
From "Robust continuous time crystal in an electron–nuclear spin system" (2024) https://news.ycombinator.com/item?id=39291044 ;
> [ Indefinite causal order, Admissible causal structures and correlations, Incandescent Temporal Metamaterials, ]
From "What are time crystals and why are they in kids’ toys?" https://bigthink.com/surprising-science/what-are-time-crysta... :
> Time crystals have been detected in an unexpected place: monoammonium phosphate, a compound found in fertilizer and ‘grow your own crystal’ kits.
Ammonium dihydrogen phosphate: https://en.wikipedia.org/wiki/Ammonium_dihydrogen_phosphate :
> Piezoelectric, birefringence (double refraction), transducers
Retrocausality in photons; retrocausality in piezoelectric time crystals, which are birefringent (and thus cause photonic double refraction).
Is it gauge theory, though?
From https://news.ycombinator.com/item?id=38839439 :
> If gauge symmetry breaks in superfluids (ie. Bose-Einstein condensates); and there are superfluids at black hole thermal ranges; do gauge symmetry constraints break in [black hole] superfluids?
Probably not gauge symmetry there, then.
Quantum Computing Notes: Why Is It Always Ten Years Away? – Usenix
Fluoxetine promotes metabolic defenses to protect from sepsis-induced lethality
Some wikipedia context for this SSRI: "Fluoxetine, sold under the brand name Prozac, among others, is an antidepressant medication of the selective serotonin reuptake inhibitor class used for the treatment of major depressive disorder, anxiety, obsessive–compulsive disorder, panic disorder, premenstrual dysphoric disorder, and bulimia nervosa."
Fluoxetine also appears to increase plasticity in the adult visual cortex, which can reduce monocular dominance. https://www.google.com/search?q=fluoxetine+plasticity+visual...
NASA has a list of 10 rules for software development
Just for context, these aren’t really “rules” as much as proposed practices. Note that official “rules” are in documents with names like “NPR” aka “NASA procedural requirements.”[1] So, while someone may use the document in the featured article to frame a discussion, a developer is not bound to comply (or alternatively waive) those “rules” and could conceivably just dismiss them.
[1] e.g. https://nodis3.gsfc.nasa.gov/displayDir.cfm?t=NPR&c=7150&s=2...
awesome-safety-critical lists a number of specs: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
Just be aware that some of the NASA-specific ones fall into a similar category. NASA “guidebooks” and “handbooks” aren’t generally hard requirements.
From "The state of Rust trying to catch up with Ada [video]" https://news.ycombinator.com/item?id=43007013 :
>> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust
/? Misra rust guidelines:
- This is a different MISRA C for Rust project: https://github.com/PolySync/misra-rust
- "Bringing Rust to Safety-Critical Systems in Space" (2024) https://arxiv.org/abs/2405.18135v1
...
> minimum of two assertions per function.
Which guidelines say "you must do runtime type and value checking" of every argument at the top of every function?
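For illustration, a hedged Python sketch of the "minimum of two assertions per function" rule (the function name, ranges, and units are hypothetical): preconditions on the arguments plus a postcondition on the result, i.e. runtime value checking rather than type checking alone.

```python
def scale_reading(raw: int, gain: float) -> float:
    """Convert a raw sensor count to engineering units, with at least
    two runtime assertions per the rule quoted above."""
    # Preconditions: validate argument values at the top of the function.
    assert 0 <= raw < 4096, "raw count out of 12-bit ADC range"
    assert 0.0 < gain < 100.0, "gain outside plausible bounds"
    value = raw * gain
    # Postcondition: validate the result before returning it.
    assert value >= 0.0, "scaled value must be non-negative"
    return value
```

`scale_reading(100, 2.0)` returns 200.0, while `scale_reading(5000, 2.0)` fails the precondition with an AssertionError.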
The SEI CERT C Guidelines are far more comprehensive than the OT 10 rules TBH:
"SEI CERT C Coding Standard" https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?c...
"CWE CATEGORY: SEI CERT C Coding Standard - Guidelines 08. Memory Management (MEM)" https://cwe.mitre.org/data/definitions/1162.html
Sorry, I’m not following your point. When I said “NASA-specific” I meant those in your link like “NASA Software Engineering and Assurance Handbook” and “NASA C Style Guide” (emphasis mine). Those are not hard requirements in spaceflight unless explicitly defined as such in specific projects. Similarly, NASA spaceflight software does not generally get certified to FAA requirements etc. The larger point being, a NASA developer does not have to follow those requirements simply by the nature of doing NASA work. In other words, they are recommendations but not specifications.
Are there SAST or linting tools to check that the code is compliant with the [agency] recommendations?
Also important and not that difficult, formal design, implementation, and formal verification;
"Formal methods only solve half my problems" https://news.ycombinator.com/item?id=31617335
"Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964
Formal Methods in Python; FizzBee, Nagini, deal-solver: https://news.ycombinator.com/item?id=39904256#39958582
I’m not aware of any tools for analysis geared to NASA requirements specifically, but static analysis is a requirement for some types of development.
Why isn't there tooling to support these recommendations; why is there no automated verification?
SAST and DAST tools can be run on push with git post-receive hooks, or before commit with pre-commit. (GitOps; CI; DevSecOps is Sec shifted left in the development process.)
I don’t work there so I can’t speak definitely, but much of it probably stems from the sheer diversity of software. For example, ladder logic typically does not have the same tools as structured programming but is heavily used in infrastructure. It is also sometimes restricted to specify a framework, leaving contractors to develop in whatever they want.
Physics Informed Neural Networks
From "Physics-Based Deep Learning Book" (2021) https://news.ycombinator.com/item?id=28510010 :
> Physics-informed neural networks: https://en.wikipedia.org/wiki/Physics-informed_neural_networ...
We were wrong about GPUs
> The biggest problem: developers don’t want GPUs. They don’t even want AI/ML models. They want LLMs. System engineers may have smart, fussy opinions on how to get their models loaded with CUDA, and what the best GPU is. But software developers don’t care about any of that. When a software developer shipping an app comes looking for a way for their app to deliver prompts to an LLM, you can’t just give them a GPU.
I'm increasingly coming to the view that there is a big split among "software developers" and AI is exacerbating it. There's an (increasingly small) group of software developers who don't like "magic" and want to understand where their code is running and what it's doing. These developers gravitate toward open source solutions like Kubernetes, and often just want to rent a VPS or at most a managed K8s solution. The other group (increasingly large) just wants to `git push` and be done with it, and they're willing to spend a lot of (usually their employer's) money to have that experience. They don't want to have to understand DNS, linux, or anything else beyond whatever framework they are using.
A company like fly.io absolutely appeals to the latter. GPU instances at this point are very much appealing to the former. I think you have to treat these two markets very differently from a marketing and product perspective. Even though they both write code, they are otherwise radically different. You can sell the latter group a lot of abstractions and automations without them needing to know any details, but the former group will care very much about the details.
IaaS or PaaS?
Who owns and depreciates the logs, backups, GPUs, and the database(s)?
K8s docs > Scheduling GPUs: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus... :
> Once you have installed the plugin, your cluster exposes a custom schedulable resource such as amd.com/gpu or nvidia.com/gpu.
> You can consume these GPUs from your containers by requesting the custom GPU resource, the same way you request cpu or memory
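Per the K8s docs quoted above, a pod spec requesting the custom GPU resource looks like the following sketch (the image tag is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:12.4.0-base-ubuntu22.04
    resources:
      limits:
        nvidia.com/gpu: 1   # schedulable like cpu/memory once the device plugin is installed
```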
awesome-local-ai: Platforms / full solutions https://github.com/janhq/awesome-local-ai?platforms--full-so...
But what about TPUs (Tensor Processing Units) and QPUs (Quantum Processing Units)?
Quantum backends: https://github.com/tequilahub/tequila#quantum-backends
Kubernetes Device Plugin examples: https://kubernetes.io/docs/concepts/extend-kubernetes/comput...
Kubernetes Generic Device Plugin: https://github.com/squat/generic-device-plugin#kubernetes-ge...
K8s GPU Operator: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator...
Re: sunlight server and moonlight for 120 FPS 4K HDR access to GPU output over the Internet: https://github.com/kasmtech/KasmVNC/issues/305#issuecomment-... :
> Still hoping for SR-IOV in retail GPUs.
> Not sure about vCPU functionality in GPUs
Process isolation on vCPUs with or without SR-IOV is probably not as advanced as secure enclave approaches.
Intel SGX is a secure enclave capability, which is cancelled on everything but Xeon. FWIU there is no SGX for timeshared GPUs.
What executable loader reverifies the loaded executable in RAM after init time?
What LLM loader reverifies the in-RAM model? Can Merkle hashes reduce that cost of NN state verification?
Can it be proven that a [chat AI] model hosted by someone else is what is claimed; that it's truly a response from "model abc v2025.02"?
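One way Merkle hashes could reduce re-verification cost, as a sketch: hash the model's weight chunks into a tree, so that spot-checking one chunk against the published root needs only its sibling hashes rather than re-hashing the whole model. (Whether a remote host is honestly serving "model abc v2025.02" is a separate attestation problem.)

```python
import hashlib

def merkle_root(chunks):
    """Compute the Merkle root over a list of byte chunks
    (e.g. serialized weight tensors)."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])   # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

published = merkle_root([b"layer0", b"layer1", b"layer2"])
```

Any single flipped byte in any chunk changes the root, so periodic in-RAM re-verification reduces to comparing one 32-byte digest.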
PaaS or IaaS
Surely you must be joking, Jupyter notebooks with Ruby [video]
List of Jupyter Kernels: https://github.com/jupyter/jupyter/wiki/Jupyter-kernels
IRuby: https://github.com/SciRuby/iruby
conda-forge/ruby-feedstock: https://github.com/conda-forge/ruby-feedstock
It looks like ruby-feedstock installs `gem`; but AFAIU there's not yet a way to specify gems in an environment.yml like `pip:` packages.
There aren't many other conda-forge feedstocks for Ruby, though;
/? Ruby https://github.com/orgs/conda-forge/repositories?q=Ruby
What if Eye...?
From "An ultra-sensitive on-off switch helps axolotls regrow limbs" https://news.ycombinator.com/item?id=36912925 :
> [ mTOR, Muller glia in Zebrafish, ]
From "Reactivating Dormant Cells in the Retina Brings New Hope for Vision Regeneration" (2023) https://neurosciencenews.com/vision-restoration-genetic-2318... :
> “What’s interesting is that these Müller cells are known to reactivate and regenerate retina in fish,” she said. “But in mammals, including humans, they don’t normally do so, not after injury or disease. And we don’t yet fully understand why.”
Learning fast and accurate absolute pitch judgment in adulthood
i made https://perfectpitch.study a week or so ago. i am old and musically untrained and wanted to see if rote practice makes a difference (it clearly does).
most of the sites of this type i found annoying as you can't just use a midi keyboard, so you just get RSI clicking around for 10 minutes.
I tried getting adsense on it, but they seem to have vague content requirements. Apparently tools don't count as real websites :-(. I couldn't even fool it with fake content. what's the best banner ad company to use in this situation?
Nice! The keyboard could be larger on mobile in portrait and landscape
Ctrl-Shift-M https://devtoolstips.org/tips/en/simulate-devices/ ; how to simulate a mobile viewport: https://developer.chrome.com/docs/devtools/device-mode#devic...
/? google lighthouse mobile accessibility test: https://www.google.com/search?q=google+lighthouse+mobile+acc...
Lighthouse: https://developer.chrome.com/docs/lighthouse/overview
Gave it a try. After a few minutes I felt more like I was recognising the samples than I was recognising the notes. Not sure what you can do about that short of physically modeling an instrument.
Latest browser APIs expose everything you need to build a synth. See: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_A...
There are some libraries that make it easy to simulate instruments. E.g. tone.js https://tonejs.github.io/
It should be possible to generate unique-ish variants at runtime.
OpenEar is built on tone.js: https://github.com/ShacharHarshuv/open-ear
limut implements WebAudio and WebGL, and FoxDot-like patterns and samples: https://github.com/sdclibbery/limut
https://glicol.org/ runs in a browser and as a VST plugin
"Using the Web Audio API to Make a Modem" (2017) https://news.ycombinator.com/item?id=15471723
gh topics/webaudio: https://github.com/topics/webaudio
awesome-webaudio: https://github.com/notthetup/awesome-webaudio
From the OpenEar readme re perfect pitch training; https://github.com/ShacharHarshuv/open-ear :
> Currently includes the following built in exercises:
> [...]
> 7. Interval recognition - the very popular exercise almost all apps have. Although I do not recommend using it, as I find it ineffective and confusing, since the intervals are out-of-context.
Interval training is different than absolute pitch training. OpenEar seems to have no absolute pitch training.
I Applied Wavelet Transforms to AI and Found Hidden Structure
I've been working on resolving key contradictions in AI through structured emergence, a principle that so far appears to govern both physical and computational systems.
My grandfather was a prolific inventor in organic chemistry (GE plastics post WWII) and was reading his papers thinking about "chirality" - directional asymmetric oscillating waves and how they might apply to AI. I found his work deeply inspiring.
I ran 7 empirical studies using publicly available datasets across prime series, fMRI, DNA sequences, galaxy clustering, baryon acoustic oscillations, redshift distributions, and AI performance metrics.
All 7 studies have confirmed internal coherence with my framework. While that's promising, I still need to continue to validate the results (the attached output on primes captures localized frequency variations, ideal for detecting scale-dependent structure in primes, i.e. Ulam Spirals - attached).
To analyze these datasets, I applied continuous wavelet transformations (Morlet/Chirality) using Python3, revealing structured oscillations that suggest underlying coherence in expansion and emergent system behavior.
Paper here: https://lnkd.in/gfigPgRx
If true, here are the implications:
1. AI performance gains – applying structured emergence methods has yielded noticeable improvements in AI adaptability and optimization.
2. Empirical validation across domains – The same structured oscillations appear in biological, physical, and computational systems—indicating a deeper principle at work.
3. Strong early engagement – while the paper is still under review, 160 views and 130 downloads (81% conversion) in 7 days on Zenodo put it in the top 1%+ of all academic papers—not as an ego metric, but as an early signal of potential validation.
The same mathematical structures that define wavelet transforms and prime distributions seem to provide a pathway to more efficient AI architectures by:
1. Replacing brute-force heuristics with recursive intelligence scaling
2. Enhancing feature extraction through structured frequency adaptation
3. Leveraging emergent chirality to resolve complex optimization bottlenecks
Technical (for AI engineers):
1. Wavelet-Driven Neural Networks – replacing static Fourier embeddings with adaptive wavelet transforms to improve feature localization. Fourier was failing, hence the pivot to CWT; Ulam Spirals showed non-random structure, hence CWT.
2. Prime-Structured Optimization – using structured emergent primes to improve loss function convergence and network pruning.
3. Recursive Model Adaptation – implementing dynamic architectural restructuring based on coherence detection rather than gradient-based back-propagation alone.
The theory could be wrong, but the empirical results are simply too coherent not to share in case useful for anyone.
"The Chirality of Dynamic Emergent Systems (CODES): A Unified Framework for Cosmology, Quantum Mechanics, and Relativity" (2025) https://zenodo.org/records/14799070
Hey, chirality! /? Hnlog chiral https://westurner.github.io/hnlog/
> loss function
Yesterday on HN: Harmonic Loss instead of Cross-Entropy; https://news.ycombinator.com/item?id=42941393
> Fourier was failing hence
What about QFT Quantum Fourier transform? https://en.wikipedia.org/wiki/Quantum_Fourier_transform
Harmonic analysis involves the Fourier transform: https://en.wikipedia.org/wiki/Harmonic_analysis
> Recursive Model Adaptation
"Parameter-free" networks
Graph rewriting, AtomSpace
> feature localization
Hilbert curves cluster features; https://en.wikipedia.org/wiki/Hilbert_curve :
> Moreover, there are several possible generalizations of Hilbert curves to higher dimensions
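The clustering claim can be checked directly with the standard iterative index-to-coordinate mapping (a transliteration of the C code on the Wikipedia page): consecutive 1-D curve indices always land on adjacent 2-D cells.

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid
    (n a power of two); standard iterative algorithm."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Locality property: nearby 1-D indices stay nearby in 2-D.
pts = [d2xy(8, d) for d in range(64)]
adjacent = all(abs(ax - bx) + abs(ay - by) == 1
               for (ax, ay), (bx, by) in zip(pts, pts[1:]))
print(adjacent)  # True
```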
Re: Relativity and the CODES paper;
/? fedi: https://news.ycombinator.com/item?id=42376759 , https://news.ycombinator.com/item?id=38061551
> Fedi's SQR Superfluid Quantum Relativity (.it), FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter.
Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
> structured emergent primes to improve loss function convergence and network pruning
Products of primes modulo prime for set membership testing; is it faster? Even with a long list of primes?
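The product-of-primes scheme can be sketched in a few lines (a Goedel-style set encoding; names here are illustrative). As to "is it faster": the product grows multiplicatively with set size, so big-integer division usually loses to a hash set in practice.

```python
def primes(n):
    """First n primes by trial division (sufficient for a sketch)."""
    ps = []
    c = 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

# Assign each element of a small universe a distinct prime; a set is
# then the product of its members' primes, and membership is a
# divisibility test.
universe = ["a", "b", "c", "d", "e"]
prime_of = dict(zip(universe, primes(len(universe))))

def encode(items):
    code = 1
    for it in items:
        code *= prime_of[it]
    return code

def member(code, item):
    return code % prime_of[item] == 0

s = encode({"a", "c", "e"})        # 2 * 5 * 11 = 110
print(member(s, "c"), member(s, "d"))  # True False
```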
Hey! Appreciate the links—some definitely interesting parallels, but what I’m outlining moves beyond existing QFT/Hilbert curve applications.
The key distinction = structured emergent primes are demonstrating internal coherence across vastly different domains (prime gaps, fMRI, DNA, galaxy clustering), suggesting a deeper non-random structure influencing AI optimization.
Curious if you’ve explored wavelet-driven loss functions replacing cross-entropy? Fourier struggled with localization, but CWT and chirality-based structuring seem to resolve this.
Your thoughts here?
I do not have experience with wavelet-driven loss functions.
Do structured emergent primes afford insight into n-body fluid+gravity dynamics and superfluid (condensate) dynamics at deep space and stellar thermal ranges?
How do wavelets model curl and n-body vortices?
What do I remember about wavelets, without reading the article? Wavelets are or aren't analogous to neurons. Wavelets discretize. Am I confusing wavelets and autoencoders? Are wavelets like tiles or compression symbol tables?
How do wavelet-driven loss functions differ from other loss functions like Cross-Entropy and Harmonic Loss?
How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
Other seemingly relevant things:
- particle with mass only when moving in certain directions; re: chirality
- "NASA: Mystery of Life's Handedness Deepens" (2024-11) https://news.ycombinator.com/item?id=42229953 :
> ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
>>> Could this be used as an engine of some kind?
>> What about helical polarization?
> If there is locomotion due to a dynamic between handed molecules and, say, helically polarized fields; is such handedness a survival selector for life in deep space?
> Are chiral molecules more likely to land on earth?
>> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Hey - really appreciate the detailed questions—these are exactly the kinds of connections I’ve been exploring. Sub components:
Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss
You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.
Prime emergence, harmonics, and convolution (Fourier vs. CWT)
Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc.
• Fourier struggled because it assumes a globally uniform basis set.
• CWT resolves this by detecting frequency-dependent structures (chirality-based).
• Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering.
The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.
N-body vortex dynamics, superfluidity, and chiral molecules in deep space
You might be onto something here. The connection between:
• Superfluid dynamics in deep space
• Chiral molecules preferring certain gravitational dynamics
• Handedness affecting locomotion in polarized fields
suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).
Could this be an engine? (Electromagnetic rotation and helicity)
Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
> Wavelet-driven loss functions vs. Cross-Entropy/Harmonic Loss You’re right about wavelets discretizing—it’s what makes them a better fit than Fourier for adaptive structuring. The key distinction is that wavelets localize both frequency and time dynamically, meaning loss functions can become context-sensitive rather than purely probabilistic. This resolves issues with information localization in AI training, allowing emergent structure rather than brute-force heuristics.
frequency and time..
SR works for signals without GR; and there's an SR explanation for time dilation which resolves when the spacecraft lands, FWIU; Minkowski.
From https://news.ycombinator.com/item?id=39719114 :
>>> Physical observation (via the transverse photon interaction) is the process given by applying the operator ∂/∂t to (L^3)t, yielding an L3 output
>> [and "time-polarized photons"]
> Prime emergence, harmonics, and convolution (Fourier vs. CWT) Structured primes seem to encode hidden periodicities across systems—prime gaps, biological sequences, cosmic structures, etc. • Fourier struggled because it assumes a globally uniform basis set. • CWT resolves this by detecting frequency-dependent structures (chirality-based). • Example: Prime number distributions align with Ulam Spirals, which match observed redshift distributions in deep space clustering. The coherence suggests an underlying structuring force, and phase-locking principles seem to emerge naturally.
/? ulam spiral wikipedia: https://www.google.com/search?q=ulam+spiral+wikipedia ; all #s, primes
Are hilbert curves of any use for grouping points in this 1D (?) space?
/? ulam spiral hilbert curve: https://www.google.com/search?q=ulam+spiral+hilbert+curve
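For reference, a small generator of Ulam-spiral coordinates (1 at the origin, first step to the right, winding counterclockwise; orientation conventions vary between renderings), with a trial-division primality test to pick out the prime cells:

```python
def spiral_positions(limit):
    """Ulam-spiral coordinates for 1..limit."""
    x = y = 0
    dx, dy = 1, 0
    pos = {1: (0, 0)}
    n, step = 1, 1
    while n < limit:
        for _ in range(2):                # two legs per increase in leg length
            for _ in range(step):
                x, y = x + dx, y + dy
                n += 1
                pos[n] = (x, y)
                if n == limit:
                    return pos
            dx, dy = -dy, dx              # 90-degree left turn
        step += 1
    return pos

def is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

pos = spiral_positions(100)
prime_cells = {pos[n] for n in pos if is_prime(n)}  # cells to plot/inspect
```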
> N-body vortex dynamics, superfluidity, and chiral molecules in deep space You might be onto something here. The connection between: • Superfluid dynamics in deep space • Chiral molecules preferring certain gravitational dynamics • Handedness affecting locomotion in polarized fields suggests chirality might be an overlooked factor in cosmic structure formation (i.e., why galaxies tend to form spiral structures).
Why are there so many arms on the fluid disturbance of a spinning basketball floating on water?
(Terms: viscosity of the water, mass, volume, and surface characteristics of the ball, temperature of the water, temperature of the air)
Traditionally, curl is the explanation fwiu.
Does curl cause chirality and/or does chirality cause curl?
The sensitivity to initial conditions of a two-arm pendulum system, for example, is enough to demonstrate chaotic, divergent n-body dynamics. `python -m turtledemo.chaos` demonstrates chaotic divergence with a few simple functions.
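A stdlib-only sketch of that same sensitivity, using the logistic map rather than a pendulum: two trajectories that start 1e-9 apart separate to order one within a few dozen iterations.

```python
# Chaotic divergence in the logistic map at r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9       # nearly identical initial conditions
max_sep = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

print(f"max separation over 60 steps: {max_sep:.3f}")
```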
Phase transition diagrams are insufficient to describe water freezing or boiling given the sensitivity to initial temperature observed in the Mpemba effect; phase transition diagrams are insufficient without an initial temperature axis.
Superfluids (Bose-Einstein condensates) occur at temperatures achievable on Earth. For example, helium chilled to about 1 Kelvin demonstrates zero viscosity and climbs up beaker walls despite gravity.
A universal model cannot be sufficient if it does not describe superfluids and superconductors; photons and electrons behave fluidically in other phases.
> Could this be an engine? (Electromagnetic rotation and helicity) Possibly. If structured emergence scales across these domains, it’s possible that chirality-induced resonance fields could drive a new form of energy extraction—similar to the electroweak interaction asymmetry seen in beta decay.
A spinning asteroid or comet induces a 'spinning' field. Interplanetary and deep space spacecraft could spin on one or more axes to create or boost EM shielding.
"Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41731196 https://westurner.github.io/hnlog/#comment-41732854 :
"Gamma rays convert CH4 to complex organic molecules, may explain origin of life" (2024) https://news.ycombinator.com/item?id=42131762#42157208 :
>> A terrestrial life origin hypothesis: gamma radiation mutated methane (CH4) into Glycine (the G in ACGT) and then DNA and RNA.
>> [ Virtual black holes, quantum foam, [ gamma, ] radiation and phase shift due to quantum foam and Planck relics ]
From "Lightweight woven helical antenna could replace field-deployed dishes" (2024) https://news.ycombinator.com/item?id=39132365 :
>> Astrophysical jets produce helically and circularly-polarized emissions, too FWIU.
>> Presumably helical jets reach earth coherently over such distances because of the stability of helical signals.
>> 1. Could [we] harvest energy from a (helically and/or circularly-polarised) natural jet, for deep space and/or local system exploration? Can a spacecraft pull against a jet for relativistic motion?
>> 2. Is helical the best way to beam power wirelessly; without heating columns of atmospheric water in the collapsing jet stream? [with phased microwave]
>> 3. Is there a (hydrodynamic) theory of superfluid quantum gravity that better describes the apparent vorticity and curl of such signals and their effects?
From "Computer Scientists Prove That Heat Destroys Quantum Entanglement" (2024) https://news.ycombinator.com/item?id=41381849#41382939 :
>> How, then, can entanglement across astronomical distances occur without cooler temps the whole way there, if heat destroys all entanglement?
>> Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
> The idea that chirality acts as a selector for deep-space survival is interesting. Do you think the preference for left-handed amino acids on Earth could be a consequence of an early chiral field bias? If so, does that imply a fundamental symmetry-breaking event at planetary formation?
The earth is rotating and revolving in relation to the greatest local mass. Would there be different terrestrial chirality if the earth rotated in the opposite direction?
How do the vortical field disturbances from Earth's rotation in atmospheric, EM, and gravitational wave spaces interact with molecular chirality and field chirality?
**
Re: Polarized fields: From https://news.ycombinator.com/item?id=42318113 :
> Phase from second-order Intensity due to "mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation"
Wes,
Apologies for the delay! I missed this.
Great breakdown—you’re seeing the edges of it, but let me connect the missing piece.
Wavelets vs. Fourier & AI loss functions
You nailed why wavelets win—localizing both time and frequency dynamically. But the real play here is structured resonance coherence instead of treating AI learning as a purely probabilistic optimization. Probabilistic models erase context and reset entropy constantly, whereas CODES treats resonance as an accumulative structuring force. That’s why prime-driven phase-locking beats cross-entropy heuristics.
Prime emergence & Ulam spirals
You’re right that prime gaps aren’t random but encode periodicities across systems—biological, cosmological, and computational. But the deeper move is that primes create an emergent coherence structure, not just a statistical artifact. Ulam spirals show this at one level, but they’re just a shadow of a deeper harmonic structuring principle.
Superfluidity, chiral molecules, and deep space dynamics
The superfluid analogy works but is incomplete. Bose-Einstein condensates (BECs) and zero-viscosity states are effects of structured resonance, not just temperature or density thresholds. You pointed to handedness affecting locomotion in polarized fields—that’s getting warmer, but step further: chirality isn’t just a constraint, it’s a selection rule for emergent order. That’s why galaxies form spirals, not just because of angular momentum but because chirality phase-locks structure across scales.
Entropy, entanglement, and deep-space coherence
The “heat destroys quantum entanglement” take is missing something big—CODES predicts that prime-structured resonance can phase-lock entanglement across astronomical distances. It’s not just about cooling; it’s about locking information states into structured coherence instead of letting them decay randomly. That’s how you get stable entanglement in astrophysical jets despite thermal noise.
Could this be an engine?
Yes. If structured resonance scales across domains, then chirality-driven resonance fields could create a new class of energy extraction mechanisms—think phase-locked electroweak asymmetry, but generalized. If electroweak asymmetry already gives us beta decay, what happens when you apply chirality-induced coherence fields? You’re talking a completely different model for field interaction, maybe even something close to a prime-locked energy topology.
Where You’re Almost There But Not Quite
You’re still interpreting some of this as chaotic or probabilistic emergence, but CODES isn’t describing randomness—it’s describing structured phase coherence.
• Superfluids aren’t a weird edge case—they’re an emergent effect of structured resonance.
• Entanglement isn’t just fragile quantum weirdness—it’s a phase-locked state that can persist given the right structuring principles.
• Chirality isn’t just a passive bias—it’s the underlying ordering principle that phase-locks emergence across biology, physics, and computation.
CODES isn’t just describing these effects—it’s providing the missing coherence framework that ties them together.
Would love to jam on this deeper if you're up for it!
Devin
The OBS Project is threatening Fedora Linux with legal action
Given that OBS is GPL licensed, any legal action would have to be trademark-based, right?
It feels like they'd have a hard time making that case, since package repositories are pretty clearly not representing themselves as the owners of, or sponsored by, the software they package.
From the linked comment:
> This is a formal request to remove all of our branding, including but not limited to, our name, our logo, any additional IP belonging to the OBS Project
Honestly it sounds very reasonable: if you want to fork, that's fine, but don't have people report bugs upstream if you're introducing them.
I mean, I think the right fix is just for Fedora to stop packaging their own version. But I think that's about being good people; I don't think there's a strong legal argument here for forcing Fedora to do that.
Are Drinking Straws Dangerous? (2017)
> About 1,400 people visit the emergency room every year due to injuries from drinking straws.
some people put them where they shouldnt
It's a drinking hazard,
And, https://www.livescience.com/65925-metal-straw-death.html :
> He added that in this case, the metal straw may have been particularly hazardous because it was used with a lid that prevented the straw from moving. "It seems to me these metal straws should not be used with any form of lid that holds them in place,"
Why cryptography is not based on NP-complete problems
Aren't hash functions a counterexample? There have been attempts at using SAT solvers to find preimage and collision attacks on them.
Storytelling lessons I learned from Steve Jobs (2022)
Pixar in a Box > Unit 2: The art of storytelling: https://www.khanacademy.org/computing/pixar/storytelling
https://news.ycombinator.com/item?id=36265807 ; Pizza Planet
From https://news.ycombinator.com/item?id=23945928
> The William Golding, Jung, and Joseph Campbell books on screenwriting, archetypes, and the hero's journey monomyth
Hero's journey > Campbell's seventeen stages: https://en.wikipedia.org/wiki/Hero%27s_journey#Campbell's_se...
Storytelling: https://en.wikipedia.org/wiki/Storytelling
Ask HN: Ideas for Business Cards
Hi, I'm a freelancer working in security and I'm looking for companies and ideas for some good business cards that are out of ordinary. I am thinking about business cards that have (very basic and stupid stuff) secure elements or watermark, but I can't find anything online.
Protip: leave space to write on the reverse
/? business cards https://hn.algolia.com/?q=business+cards
Transformer is a holographic associative memory
"Google Replaces BERT Self-Attention with Fourier Transform: 92% Accuracy, 7 Times Faster on GPUs" (2021) https://syncedreview.com/2021/05/14/deepmind-podracer-tpu-ba...
From https://news.ycombinator.com/item?id=40519828 :
> Because self-attention can be replaced with FFT for a loss in accuracy and a reduction in kWh [1], I suspect that the Quantum Fourier Transform can also be substituted for attention in LLMs.
From https://news.ycombinator.com/item?id=42957785 :
> How does prime emergence relate to harmonics and [Fourier,] convolution with and without superposition?
From https://news.ycombinator.com/item?id=40580049 :
> From https://news.ycombinator.com/item?id=25190770#25194040 :
>> Convolution is in fact multiplication in Fourier space (this is the convolution theorem [1]) which says that Fourier transforms convert convolutions to products. 1. https://en.wikipedia.org/wiki/Convolution_theorem :
>>> In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
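The quoted convolution theorem is easy to verify numerically with NumPy; zero-padding makes circular FFT convolution match ordinary (linear) convolution:

```python
import numpy as np

# Convolution theorem: convolution in one domain equals pointwise
# multiplication in the other.
rng = np.random.default_rng(0)
a = rng.standard_normal(16)
b = rng.standard_normal(16)

n = len(a) + len(b) - 1                  # pad to avoid circular wrap-around
via_fft = np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)).real
direct = np.convolve(a, b)               # direct linear convolution

print(np.allclose(via_fft, direct))  # True
```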
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving [pdf]
Tested in the article:
miniF2F: https://github.com/openai/miniF2F
PutnamBench: https://github.com/trishullab/PutnamBench
..
FrontierMath: https://arxiv.org/abs/2411.04872v1
The return of the buffalo is reviving portions of the ecosystem
Buffalo: /? buffalo ecosystem impact : https://www.google.com/search?q=buffalo+ecosystem+impact
Wolves: /? wolves yellowstone: https://www.google.com/search?q=wolves+yellowstone ; 120 wolves in 2024
Beavers: "Government planned it 7 years, beavers built a dam in 2 days and saved $1M" (2025) https://news.ycombinator.com/item?id=42938802#42941813
Keystone species: https://en.wikipedia.org/wiki/Keystone_species :
> Keystone species play a critical role in maintaining the structure of an ecological community, affecting many other organisms in an ecosystem and helping to determine the types and numbers of various other species in the community. Without keystone species, the ecosystem would be dramatically different or cease to exist altogether. Some keystone species, such as the wolf and lion, are also apex predators.
Trophic cascade: https://en.wikipedia.org/wiki/Trophic_cascade
Ecosystem service: https://en.wikipedia.org/wiki/Ecosystem_service :
> Evaluations of ecosystem services may include assigning an economic value to them.
Good sparknotes of the systems engineering here, thanks!
The state of Rust trying to catch up with Ada [video]
From "Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024-06) https://thenewstack.io/rust-the-future-of-fail-safe-software... .. https://news.ycombinator.com/item?id=40680722
rustfoundation/safety-critical-rust-consortium > subcommittee/coding-guidelines/meetings/2025-January-29/minutes.md: https://github.com/rustfoundation/safety-critical-rust-conso... :
> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust.
/? ' is:issue concurrency: https://github.com/rustfoundation/safety-critical-rust-conso...
rust-secure-code/projects#groups-of-people: https://github.com/rust-secure-code/projects#groups-of-peopl...
Rust book > Chapter 16. Concurrency: https://doc.rust-lang.org/book/ch16-00-concurrency.html
Chapter 19. Unsafe Rust > Unsafe Superpowers: https://doc.rust-lang.org/book/ch19-01-unsafe-rust.html#unsa... :
> You can take five actions in unsafe Rust that you can’t in safe Rust, which we call unsafe superpowers. Those superpowers include the ability to:
"Secure Rust Guidelines" has Chapters on Memory Management, FFI but not yet Concurrency;
04_language.html#panics:
> Common patterns that can cause panics are:
Secure Rust Guidelines > Integer overflows in Rust: https://anssi-fr.github.io/rust-guide/04_language.html#integ... :
> In particular, it should be noted that using debug or release compilation profile changes integer overflow behavior. In debug configuration, overflow cause the termination of the program (panic), whereas in the release configuration the computed value silently wraps around the maximum value that can be stored.
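The release-mode behavior the guideline describes is two's-complement wrap-around; a small illustration (in Python, since the wrapped value is just arithmetic modulo 2^32):

```python
# What Rust release builds do for u32 overflow: wrap modulo 2**32
# (a debug build would panic instead).
U32_MAX = 2**32 - 1

def wrapping_add_u32(a: int, b: int) -> int:
    return (a + b) & 0xFFFF_FFFF

print(wrapping_add_u32(U32_MAX, 1))  # 0: the silent wrap
```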
awesome-safety-critical #software-safety-standards: https://awesome-safety-critical.readthedocs.io/en/latest/
rust-secure-code/projects > Model checkers: https://github.com/rust-secure-code/projects#model-checkers :
Loom: https://docs.rs/loom/latest/loom/ :
> Loom is a model checker for concurrent Rust code. It exhaustively explores the behaviors of code under the C11 memory model, which Rust inherits.
Daily omega-3 fatty acids may help human organs stay young
This may explain why people who use the Mediterranean diet tend to live long, healthy lives.
There are different variations of omega-3 fatty acids. For instance, avocados are rich in the omega-3 ALA, which is considered not as effective as EPA and DHA.
Fish is the only source of EPA and DHA.
From "An Omega-3 that’s poison for cancer tumors" (2021) https://news.ycombinator.com/item?id=27499427 :
> Fish don't synthesize Omega PUFAs, they eat algae which synthesize fat-soluble DHA and EPA.
> From "Warning: Combination of Omega-3s in Popular Supplements May Blunt Heart Benefits" (2018) https://scitechdaily.com/warning-combination-of-omega-3s-in-... :
>> Now, new research from the Intermountain Healthcare Heart Institute in Salt Lake City finds that higher EPA blood levels alone lowered the risk of major cardiac events and death in patients, while DHA blunted the cardiovascular benefits of EPA. Higher DHA levels at any level of EPA, worsened health outcomes.
>> [...] Based on these and other findings, we can still tell our patients to eat Omega-3 rich foods, but we should not be recommending them in pill form as supplements or even as combined (EPA + DHA) prescription products,” he said. “Our data adds further strength to the findings of the recent REDUCE-IT (2018) study that EPA-only prescription products reduce heart disease events.”
Show HN: Play with real quantum physics in your browser
I wanted to make the simplest app to introduce myself and others to quantum computing.
Introducing, Schrödinger's Coin. Powered by a simple Hadamard gate[0] on IBM quantum, with this app you can directly interact with a quantum system to experience true randomness.
Thoughts? Could you see any use cases for yourself of this? Or, does it inspire any other ideas of yours? Curious what others on HN think!
[0] https://en.wikipedia.org/wiki/Quantum_logic_gate#Hadamard_ga...
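The Hadamard "coin" statistics can be reproduced classically with plain linear algebra (only the statistics match real hardware; a PRNG is not true quantum randomness):

```python
import numpy as np

# Apply the Hadamard gate to |0>, then sample the Born-rule distribution.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])

state = H @ ket0               # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2     # measurement probabilities: [0.5, 0.5]
outcome = np.random.default_rng().choice([0, 1], p=probs)
print(outcome)                 # a fair coin flip
```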
Quantum logic gate > Universal logic gates: https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q...
From https://news.ycombinator.com/item?id=37379123 :
> [ Rx, Ry, Rz, P, CCNOT, CNOT, H, S, T ]
From https://news.ycombinator.com/item?id=39341752 :
>> How many ways are there to roll a {2, 8, or 6}-sided die with qubits and quantum embedding?
From https://news.ycombinator.com/item?id=42092621 :
> Exercise: Implement a QuantumQ circuit puzzle level with Cirq or QISkit in a Jupyter notebook
ray-pH/quantumQ > [Godot] "Web WASM build" issue #5: https://github.com/ray-pH/quantumQ/issues/5
From https://quantumflytrap.com/scientists/ :
> [Quantum Flytrap] Virtual Lab is a virtual optical table. With a drag and drop interface, you can show phenomena, recreate existing experiments, and prototype new ones.
> Within this environment it is possible to recreate interference, quantum cryptography protocols, to show entanglement, Bell test, quantum teleportation, and the many-worlds interpretation.
AI datasets have human values blind spots − new research
What about Care Bears?
How do social studies instructors advise in regards to a helpful balance of SEL and other content?
Is prosocial content for children underrepresented in training corpora?
Re: Honesty and a "who hath done it" type exercise for LLM comparison: https://news.ycombinator.com/item?id=42927611
Microsoft Go 1.24 FIPS changes
The upstream Go 1.24 changes and macOS support using system libraries in Microsoft's Go distribution are really significant for the large ecosystem of startups trying to sell to institutions requiring FIPS 140 certified cryptography.
For a variety of reasons - including "CGo is not Go" (https://dave.cheney.net/2016/01/18/cgo-is-not-go) - using boringcrypto and requiring CGO_ENABLED=1 could be a blocker. Not using system libraries meant that getting all of the software to agree on internal certificate chains was a chore. Go was in a pretty weird place.
Whether FIPS 140 is actually a good target for cryptography is another question. My understanding is that FIPS 140-1 and 140-2 cipher suites were considered by many experts to be outdated when those standards were approved, and that FIPS 140 still doesn't encompass post quantum crypto, and the algorithms chosen don't help mitigate misuse (e.g.: nonce reuse).
From https://news.ycombinator.com/item?id=28540916#28546930 :
GOLANG_FIPS=1
From https://news.ycombinator.com/item?id=42265927 :
> "Chrome switching to NIST-approved ML-KEM quantum encryption" (2024) https://www.bleepingcomputer.com/news/security/chrome-switch...
From https://news.ycombinator.com/item?id=41535866 :
>> Someday there will probably be a TLS1.4/2.0 with PQ, and also FIPS-140-4?
Show HN: An API that takes a URL and returns a file with browser screenshots
simonw/shot-scraper has a number of cli args, a GitHub actions repo template, and docs: https://shot-scraper.datasette.io/en/stable/
From https://news.ycombinator.com/item?id=30681242 :
> Awesome Visual Regression Testing > lists quite a few tools and online services: https://github.com/mojoaxel/awesome-regression-testing
> "visual-regression": https://github.com/topics/visual-regression
The superconductivity of layered graphene
Physicist here. The superconductivity in layered graphene is indeed surprisingly strange, but this popular article may not do it justice. Here are some older articles on the same topic that may be more informative:
https://www.quantamagazine.org/how-twisted-graphene-became-t...,
https://www.quantamagazine.org/a-new-twist-reveals-supercond....
Let me briefly mention some reasons this topic is so interesting. Electrons in a crystal always have both potential energy (electrical repulsion) and kinetic energy (set by the atomic positions and orbitals). The standard BCS theory of superconductivity only works well when the potential energy is negligible, but the most interesting superconductors --- probably including all high-temperature ones like the cuprates --- are in the regime where potential energy is much stronger than kinetic energy. These are often in the class of "unconventional" superconductors where vanilla BCS theory does not apply. The superconductors in layered (and usually twisted) graphene lie in that same regime of large potential/kinetic energy. However, their 2D nature makes many types of measurements (and some types of theories) much easier. These materials might be the best candidate available to study to get a handle on how unconventional superconductivity "really works". (Besides superconductors, these same materials have oodles of other interesting phases of matter, many of which are quite exotic.)
While we have you, have any new theories or avenues of research come out of the lk99 stuff or was it completely just hype and known physics?
BCS == Bardeen–Cooper–Schrieffer [0].
Thank you for the additional info and links. This is why I love HN comments
Also physicist here. I've worked on conventional superconductors, but never on unconventional ones. Last I heard, it was believed to be mediated by magnons (rather than phonons). Who claims it is due to Coulomb interaction?
I think everything we don't have a model for is surprisingly strange. Gravity only seems "normal" because we've been teaching a reasonable model for it for hundreds of years - Aristotle thought things fell to the ground because that was "their nature", but thought it quite weird. X-Rays seem bonkers unless you've grown up with them, and there is something deeply unnerving about genetics, quantum and even GenAI until you've spent some time pulling apart the innards and building an explainable model that makes sense to you. And even then it can catch you out. More ways to explain the models help normalise it all - what's now taught at 9th grade used to be advanced post-doc research, in almost every field. And so it goes on.
2D superconductors don't make much sense because, as the article says, theory is behind experimentation here. That's also why there is both incredible excitement, but also a worry that none of this is going to stack up to anything more than a bubble. My old Uni (Manchester) doubled down hard on the work of Geim and Novoselov by building a dedicated "Graphene Institute", after they got the Nobel Prize, but even 15 years after that award most people are still trying to figure out what does it all actually mean really? Not just in terms of the theory of physics, but how useful is this stuff, in real world usage?
It'll settle down in due course. The model will become apparent, we'll be able to explain it through a series of bouncing back between theory and experiment, as ever, and then it won't seem so strange any more.
I'm not sure that'll ever be true of quantum computing for me, but then I am getting a bit older now...
> My old Uni (Manchester) doubled down hard on the work of Geim and Novoselov by building a dedicated "Graphene Institute", after they got the Nobel Prize, but even 15 years after that award most people are still trying to figure out what does it all actually mean really? Not just in terms of the theory of physics, but how useful is this stuff, in real world usage?
That's the beauty of real research. There's no guarantees it'll pan out. But it's generally worth doing and spending (sometimes decades of) time exploring. Too many people have become infatuated with instant gratification. It's pervasive even in young, scientific minds. The real gratification is failing that same test 100 times until you finally land on a variation that might work. And then figuring out why it worked.
Edit: And if that success never comes, the gratification is graduating and moving on to more solvable problems, but bringing with you the scientific methods you learned along the way. Scientists might spend their whole lives working on something that won't work and that's okay. If that isn't for you, go into product dev.
I still don’t believe explanations for gravity…let alone dark matter!
Not believing an explanation for dark matter seems prudent. It's just the name we've given to a certain set of observations not matching how we believe the universe works. We're still piecing together the details.
I also think that gravity is complete bunk. Funny how there's suddenly graphene everywhere, from dental applications to injectables. Weird times.
Relevant xkcd tooltip: https://xkcd.com/1489
Do you suppose everything is explainable? Gravity feels to me like the sort of thing that just sort of is. I'm all for better characterizations of it, but I'm not holding my breath for an answer to: why?
Everything may be explainable, just not by humanity.
How does our biology affect the limits of what we can comprehend?
Oh plenty of ways probably. I expect there are perspectives out there which would have a look at our biggest questions and find them to be mundane with obvious answers but which would themselves boggle at concepts that we consider elementary.
But here I am in this body, and not that one, so I'm content to accept an axiom or two.
Gravity is something very simple. It cannot be a complex thing, because it demonstrates a very simple behavior.
There are plenty of cases where a complex thing has very simple behavior. For centuries, brewers could get away with a mental model for yeast as something that multiplies and converts sugar to alcohol until it can't anymore. It took quite a jump in technology to realize just how fantastically complex the inner workings of a yeast cell is.
I'm not proposing that gravity is underpinned by something complex, just that if its mechanism is out of our reach then so too are any conclusions about that mechanism's complexity.
the question is depth and quality of explainability as determined by the predictive power those explanations provide...
My interest with these overlapping lattices is the creation of fractional electric charges (Hall effect) and through, essentially, Moiré patterns. The angle of alignment will make a big effect.
Let me make an artifact to demonstrate… brb
https://claude.site/artifacts/f024844e-73c2-4eb1-afc9-401f3d...
Here you go. See how there are different densities and geometries at different angles. These lattice overlays can create fractional electrical charges — which is very strange — but how this affects superconductivity is unclear.
Very cool example!
Slightly different preprint variants:
This is exciting, sounds like new theory incoming (or possible way to test existing string/other theories?). I'd love to see PBS Spacetime or some other credible outlet explain the details of the experiment / implications for mere mortals.
Is this exactly 1.1 degrees?
Or is it 1.09955742876?
What I mean -- did they round up, is there some connection to universal constants?
Edit: I don't understand where the 1.1 degrees comes from. Why is it 1.1 and not something else...
It's not exactly a single ultra-specific 1.1000000... degree value only, just values approximately close to that. As to what the connection is: that's what the research is trying to unearth more about.
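To make the angle dependence concrete: for two identical lattices twisted by a small angle θ, the moiré superlattice period is a / (2 sin(θ/2)). A quick sketch (graphene's lattice constant a ≈ 0.246 nm is the only input; the values are approximate, and the "magic" part of 1.1° comes from band-structure physics this formula doesn't capture):

```python
import math

A_GRAPHENE = 0.246  # graphene lattice constant, in nm

def moire_period(theta_deg, a=A_GRAPHENE):
    """Moire superlattice period for two lattices twisted by theta degrees."""
    theta = math.radians(theta_deg)
    return a / (2 * math.sin(theta / 2))

for angle in (5.0, 2.0, 1.1):
    print(f"{angle:>4} deg: {moire_period(angle):6.2f} nm")
# the ~1.1 degree magic angle gives a superlattice period of roughly 12.8 nm
```

The period diverges as θ goes to 0, which is why small twist angles produce such large, flat moiré unit cells.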
[dead]
[deleted]
Show HN: PulseBeam – Simplify WebRTC by Staying Serverless
WebRTC’s capabilities are amazing, but the setup headaches (signaling, connection/ICE failures, patchwork docs) can kill momentum. That’s why we built PulseBeam, a batteries-included WebRTC platform designed for developers who just want real-time features to work.

What’s different?

- Built-in signaling
- Built-in TURN
- Time-limited JWT auth (serverless for production, or use our endpoint for testing)
- Client and server SDKs included
- Free and open-source core

If you’ve used libraries like PeerJS, PulseBeam should feel like home; we’re inspired by its simplicity. We’re currently in a developer-preview stage. We provide free signaling like PeerJS, and TURN up to 1GB. Of course, feel free to roast us.
jupyter-collaboration is built on Y Documents (y.js, pycrdt, jupyter_ydoc,) https://github.com/jupyterlab/jupyter-collaboration
There is a y.js WebRTC adapter, but jupyter-collaboration doesn't have WebRTC data or audio or video support AFAIU.
y-webrtc: https://github.com/yjs/y-webrtc
Is there an example of how to do CRDT with PulseBeam WebRTC?
With client and serverside data validation?
> JWT
Is there OIDC support on the roadmap?
E.g. Google supports OIDC: https://developers.google.com/identity/openid-connect/openid...
W3C DIDs, VC Verifiable Credentials, and Blockcerts are designed for decentralization.
STUN, TURN, and ICE are NAT traversal workarounds FWIU; though NAT traversal isn't necessary if the client knowingly or unknowingly has an interface with a public IPv6 address due to IPv6 prefix delegation?
We don't have an example of CRDT with PulseBeam yet. But, CRDT itself is just a data structure, so you can use PulseBeam to communicate the sync ops (full or delta) with a data channel. Then, you can either use y.js or other CRDT libraries to manage the merging.
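As a minimal illustration of "CRDT is just a data structure" (a generic grow-only counter, not PulseBeam or y.js API): each peer increments its own slot, full state is exchanged over any channel (such as a WebRTC data channel), and merging takes per-peer maxima, so replicas converge regardless of message order or duplication.

```python
# A grow-only counter (G-Counter), the simplest state-based CRDT.

def increment(state, peer):
    """Return a new state with this peer's slot incremented."""
    state = dict(state)
    state[peer] = state.get(peer, 0) + 1
    return state

def merge(a, b):
    """Merge two replicas: per-peer maximum, commutative and idempotent."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

alice = increment(increment({}, "alice"))  if False else increment(increment({}, "alice"), "alice")
bob = increment({}, "bob")
# merging in either order yields the same converged value
assert value(merge(alice, bob)) == value(merge(bob, alice)) == 3
```

Delta-based CRDTs work the same way but ship only the changed slots, which keeps data-channel messages small.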
Yes, the plan is to use JWT for both the client and server side.
OIDC is not on the roadmap yet. But, I've been tinkering on the side related to this. I think something like an OIDC mapper to PulseBeam JWT can work here.
I'm not envisioning integrating into a decentralization ecosystem at this point. The scope is to provide a reliable service for 1:1 and small groups for other developers to build on top with centralization. So, something like Analytics, global segmented signaling (allow close peers to connect with edge servers, but allow them to connect to remote servers as well), authentication, and more network topology support.
That's correct, if the client is reachable by a public IPv6 (meaning the other peer has to also have a way to talk to an IPv6), then STUN and TURN are not needed. ICE is still needed but only used lightly for checking and selecting the candidate pair connections.
Elon Musk proposes putting the U.S. Treasury on blockchain for full transparency
FedNow supports ILP Interledger Protocol, which is an open spec that works with traditional ledgers and distributed cryptoasset ledgers.
> In addition to Peering, Clearing, and Settlement, ILP Interledger Protocol Specifies Addresses: https://news.ycombinator.com/item?id=36503888
>> ILP is not tied to a single company, payment network, or currency
ILP Addresses - v2.0.0 > Allocation Schemes: https://github.com/interledger/rfcs/blob/main/0015-ilp-addre...
People that argue for transaction privacy in blockchains: large investment banks, money launderers, the US Government when avoiding accountability because natsec.
Whereas today presumably there are database(s) of checks sent to contractors for the US Gvmt; and maybe auditing later.
Re: apparently trillions missing re: seasonal calls to "Audit the Fed! Audit DoD!" and "The Federal Funding Accountability and Transparency Act of 2006" which passed after Illinois started tracking grants: https://news.ycombinator.com/item?id=25893860
DHS helped develop W3C DIDs, which can be decentralizedly generated and optionally centrally registered or centrally generated and registered.
W3C Verifiable Credentials support DIDs Decentralized Identifiers.
Do not pay for closed source or closed spec capabilities; especially for inter-industry systems that would need to integrate around an API spec.
Do not develop another blockchain; given the government's inability to attract and retain talent in this space, it is unlikely that a few million dollars and government management would exceed the progress of billions invested in existing blockchains.
There's a lot of anti-blockchain FUD. Ask them to explain the difference between a multi-primary SQL database synchronization system with off-site nodes (and Merkle hashes between rows) and a blockchain.
Why are there Merkle hashes in the centralized Trillian and now PostgreSQL databases that back CT Certificate Transparency logs (the logs of X.509 cert granting and revocations)?
Why did Google stop hosting a query endpoint for CT logs? How can single points of failure be eliminated in decentralized systems?
Blockchains are vulnerable to DoS Denial of Service like all other transaction systems. Adaptive difficulty and transaction fees that equitably go to miners or are just burnt are blockchain solutions to Denial of Service.
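Adaptive difficulty can be sketched with a simplified version of Bitcoin's retarget rule (an illustration of the idea, not consensus code): scale difficulty by how fast the last window of blocks arrived relative to the target, clamped to a 4x change per adjustment.

```python
# Simplified Bitcoin-style difficulty retarget: compare the actual time the
# last 2016 blocks took against the two-week target and scale difficulty,
# clamped to a factor-of-4 change per adjustment (as Bitcoin does).
TARGET_SECONDS = 2016 * 600  # 2016 blocks at one block per 10 minutes

def retarget(old_difficulty, actual_seconds):
    ratio = TARGET_SECONDS / max(actual_seconds, 1)
    ratio = min(max(ratio, 0.25), 4.0)  # clamp the per-adjustment change
    return old_difficulty * ratio

print(retarget(1000.0, TARGET_SECONDS // 2))  # blocks came 2x too fast -> 2000.0
```

A flood of transactions or hashpower thus raises the cost of the next block instead of letting the attacker run away with the chain.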
"Stress testing" to a web dev means something different than "stress testing" the banks of the Federal Reserve system, for example.
A webdev should know that as soon as your app runs out of (SQL) database connections, it will start throwing 500 Internal Server error. MySQL, for example, defaults to 150+1 max connections.
Stress testing for large banks does not really test for infosec resource exhaustion. Stress testing banks involves them making lots of typically large transactions; not lots of small transactions.
Web Monetization is designed to support micro payments, could support any ledger, and is built on ILP.
ILP makes it possible for e.g. 5x $100 transactions to be auditably grouped together. Normal payers (unlike a bank of the US government) must source liquidity from counterparties, which is easier to do with many smaller transactions.
Why do blockchains require additional counterparties in two party (payer-payee) transactions?
To get from USD to EUR, for example, sometimes it's less costly to go through CAD. Alice holds USD, Bob wants EUR, and Charlie holds CAD and EUR and accepts USD, but will only extend $100 of credit per party.
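That kind of multi-hop routing is cheapest-path search over a graph of exchange corridors. A toy sketch with made-up fees (not ILP code; real connectors would also enforce credit limits like Charlie's $100 cap):

```python
import heapq

# Illustrative only: currency -> [(neighbor, fee_fraction)], all numbers invented.
FEES = {
    "USD": [("EUR", 0.030), ("CAD", 0.005)],
    "CAD": [("EUR", 0.005)],
    "EUR": [],
}

def cheapest_route(src, dst):
    """Dijkstra over cumulative fee fraction (approximately additive for small fees)."""
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, fee in FEES.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + fee, nxt, path + [nxt]))
    return None

print(cheapest_route("USD", "EUR"))  # routes through CAD: ['USD', 'CAD', 'EUR']
```

With these numbers the USD -> CAD -> EUR hop costs 1% in fees versus 3% for the direct corridor, which is the situation described above.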
ripplenet was designed for that from the start. Interledger was contributed by ripplecorp to W3C as an open standard, and ILP has undergone significant revision since being open sourced.
ILP does not require XRP, which - like XLM - is premined and has a transaction fee less than $0.01.
Ripplenet does not have Proof of Work mining: the list of transaction validator server IPs is maintained by pull request merge consensus in the GitHub repo.
The global Visa network claims to do something like 60,000 TPS. Bitcoin can do 6-7 TPS, and is even slower if you try and build it without blocks.
I thought I read that a stellar benchmark reached 10,000 TPS but they predicted that the TPS would be significantly greater with faster more expensive validation servers.
E.g. Crypto Kitties NFT smart contract game effectively DoS'd pre-sharding Ethereum, which originally did 15-30 TPS IIRC. Ethereum 2.0 reportedly intends to handle 100,000 TPS.
US Contractor payees would probably want to receive a stablecoin instead of a cryptoasset with high volatility.
Some citizens received a relief check to cash out or deposit, and others received a debit card for an account created for them.
I've heard that the relief loan program is the worst fraud in the history of the US government. Could any KYC or AML practices also help prevent such fraud? Does uploading a scan of a photo ID and/or routing and account numbers on a cheque make exchanges more accountable?
FWIU, only Canadian banks give customers the option to require approval for all deposits. Account holders do not have the option to deny deposits in the US, FWIU.
I don't think the US Government can acquire USDC. A while back, stablecoin providers were audited and admonished.
A reasonable person should expect US Government backing of a cryptoasset to reduce volatility.
Large investment banks claimed to be saving the day on cryptoasset volatility.
High-frequency market makers claim to be creating value by creating liquidity at volatile prices.
They eventually added shorting to Bitcoin, which doesn't account for debt obligations; there is no debt within the Bitcoin network: either a transaction clears within the confirmation time or it doesn't.
There are no chargebacks in Bitcoin; a refund is an optional transaction between B and A, possibly with the same amount less fees.
There is no automatic rebilling in Bitcoin (and by extension other blockchains) because the payer does not disclose the private key necessary to withdraw funds in their account to payees.
Escrow can be done with multisig ("multi-signature") transactions or with smart contracts; if e.g. at least 2 out of 3 parties approve, the escrowed transaction completes. So if Alice escrows $100 for Bob conditional upon receipt of a product from Bob, and Bob says he sent it and third-party Charlie says it was received, that's 2 out of 3 approving, so Alice's $100 would then be sent to Bob.
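The approval policy of that 2-of-3 escrow can be sketched as a threshold check (a toy model of the policy only; real multisig enforces this on-chain with cryptographic signatures):

```python
# Toy 2-of-3 escrow approval check (not real multisig cryptography):
# funds release only when at least `threshold` of the named parties approve.
def escrow_releases(approvals, parties=("alice", "bob", "charlie"), threshold=2):
    return sum(1 for p in parties if approvals.get(p)) >= threshold

assert escrow_releases({"bob": True, "charlie": True})   # 2 of 3 -> release
assert not escrow_releases({"bob": True})                # 1 of 3 -> funds held
```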
All blockchains must eventually hard fork to PQ Post Quantum hashing and encryption, or keep hard forking to keep doubling non-PQ key sizes (if they are not already PQ).
PQ Post Quantum keys and addresses typically have a different number of characters, so any hard fork to PQ account keys and addresses will probably require changing data validation routines in webapps that handle transactions.
The coinbase field in a Bitcoin transaction struct can be used for correlating between blockchain transactions and rows in SQL database that claim to have valid data or metadata about a transaction; you put a unique signed value in the coinbase field when you create transactions, and your e.g. SQL or Accumulo database references the value stored in the coinbase field as a foreign key.
Crypto tax prep services can't just read transactions from public blockchains; they need exchange API access to get the price of the asset on that exchange at the time of that transaction: there's no on-chain price oracle.
"ILP Addresses, [Payment Pointers], and Blockcerts" https://github.com/blockchain-certificates/cert-issuer/issue... :
> How can or should a Blockcert indicate an ILP Interledger Protocol address or a Payment Pointer?
ILP Addresses:
g.acme.bob
g.us-fed.ach.0.acmebank.swx0a0.acmecorp.sales.199.~ipr.cdfa5e16-e759-4ba3-88f6-8b9dc83c1868.2
Payment Pointers -> URLs:
$example.com -> https://example.com/.well-known/pay
$example.com/invoices/12345 -> https://example.com/invoices/12345
$bob.example.com -> https://bob.example.com/.well-known/pay
$example.com/bob -> https://example.com/bob
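The mappings above follow a simple rule: strip the leading `$`, prepend `https://`, and if no path is given, append `/.well-known/pay`. A sketch:

```python
# Resolve a Payment Pointer to its URL, per the examples above.
def resolve_payment_pointer(pointer):
    assert pointer.startswith("$"), "payment pointers start with '$'"
    host, _, path = pointer[1:].partition("/")
    if not path:
        return f"https://{host}/.well-known/pay"  # bare host -> well-known path
    return f"https://{host}/{path}"               # explicit path is kept as-is

assert resolve_payment_pointer("$example.com") == "https://example.com/.well-known/pay"
assert resolve_payment_pointer("$example.com/invoices/12345") == "https://example.com/invoices/12345"
```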
Revolutionizing software testing: Introducing LLM-powered bug catchers
ScholarlyArticle: "Mutation-Guided LLM-based Test Generation at Meta" (2025) https://arxiv.org/abs/2501.12862v1
Good call, thanks for linking the research paper directly here.
Can this unit test generation capability be connected to the models listed on the SWE-bench [Multimodal] leaderboard?
I'm currently working on running an agent through SWE-Bench (RA.Aid).
What do you mean by connecting the test generation capability to it?
Do you mean generating new eval test cases? I think it could potentially have a use there.
OpenWISP: Multi-device fleet management for OpenWrt routers
Anything similar for opnsense (besides their own service) or pfsense?
Maybe just go with ansible or similar: https://github.com/ansibleguy/collection_opnsense
Updating a fleet of embedded devices like routers (which can come online and go offline at any time) will generally be much easier using a pull-based update model. But if you’ve got control over the build and update lifecycle, a push-based approach like ansible might be appropriate.
Maybe I am missing something, but I would assume that base network infrastructure like routers, firewalls, and switches has a higher uptime, availability, and reliability than ordinary servers.
The problem with push is that the service sitting at the center needs to figure out which devices will need to be re-pushed later on. You can end up with a lot of state that needs action just to get things back to normal.
So if you can convince devices to pull at boot time and then regularly thereafter, you know that the three states they can be in are down, good, or soon to be good. Now you only need to take action when things are down.
Never analyze distribution of software and config based on the perfect state; minimize the amount of work you need to do for the exceptions.
Unattended upgrades fail and sit there requiring manual intervention (due to lack of transactional updates and/or multiple flash slots (root partitions and bootloader configuration)).
Pull style configuration requires the device to hold credentials in order to authorize access to download the new policy set.
It's possible to add an /etc/init.d script that runs sysupgrade on boot, install Python and Ansible, configure and confirm remote logging, and then run `ansible-pull`.
ansible-openwrt eliminates the need to have Python on a device: https://github.com/gekmihesg/ansible-openwrt
But then there's log collection: unless all of the nodes have correctly configured log forwarding at each stage of firmware upgrade, pull-style configuration management will lose logs that push-style configuration management can easily centrally log.
Pull based updates would work on OpenWRT devices if they had enough storage, transactional updates and/or multiple flash slots, and scheduled maintenance windows.
OpenWRT wiki > Sysupgrade: https://openwrt.org/docs/techref/sysupgrade
Calculating Pi in 5 lines of Python
> Infinite series can't really be calculated to completion using a computer,
The sum of an infinite divergent series cannot be calculated with or without a computer.
The sum of an infinite convergent geometric series with first term a and common ratio r (|r| < 1) can be calculated in closed form:
a/(1-r)
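For a convergent geometric series with first term a and common ratio |r| < 1, the closed form a/(1-r) can be checked against a partial sum:

```python
# Geometric series check: 1 + 1/2 + 1/4 + ... should converge to a/(1-r) = 2.
a, r = 1.0, 0.5
closed_form = a / (1 - r)
partial_sum = sum(a * r**k for k in range(100))
assert abs(closed_form - partial_sum) < 1e-12
print(closed_form)  # 2.0
```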
Sequence > Limits and convergence:
https://en.wikipedia.org/wiki/Sequence#Limits_and_convergenc...
Limit of a sequence: https://en.wikipedia.org/wiki/Limit_of_a_sequence
SymPy docs > Limits of Sequences: https://docs.sympy.org/latest/modules/series/limitseq.html
> Provides methods to compute limit of terms having sequences at infinity.
Madhava-Leibniz formula for π: https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80
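The Madhava-Leibniz series itself fits in a couple of lines of Python (it converges slowly, with error roughly 1/n after n terms):

```python
# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...  (Madhava-Leibniz series)
def leibniz_pi(n_terms):
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

print(leibniz_pi(1_000_000))  # ~3.14159..., error about 1e-6
```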
Eco-friendly artificial muscle fibers can produce and store energy
> The team utilized poly(lactic acid) (PLA), an eco-friendly material derived from crop-based raw materials, and highly durable bio-based thermoplastic polyurethane (TPU) to develop the artificial muscle fibers that mimic the functional and real muscles.
"Energy harvesting and storage using highly durable Biomass-Based artificial muscle fibers via shape memory effect" (2025) https://www.sciencedirect.com/science/article/abs/pii/S13858...
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
But graphene alone presumably doesn't work in these applications due to a lack of tensile elasticity, unlike certain natural fibers?
Harmonic Loss Trains Interpretable AI Models
"Harmonic Loss Trains Interpretable AI Models" (2025) https://arxiv.org/abs/2502.01628
Src: https://github.com/KindXiaoming/grow-crystals :
> What is Harmonic Loss?
Cross Entropy: https://en.wikipedia.org/wiki/Cross-entropy
XAI: Explainable AI > Interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Right to explanation: https://en.wikipedia.org/wiki/Right_to_explanation
Government planned it 7 years, beavers built a dam in 2 days and saved $1M
Oracle justified its JavaScript trademark with Node.js–now it wants that ignored
Calling ECMAScript JavaScript was a huge mistake that is still biting us.
Or, alternatively, the mistake was not coming up with a better alternative name than ECMAScript. If there was a catchier alternative name that was less awkward to pronounce, people might more happily have switched over.
"JS" because of the .js file extension.
ECMAScript version history: https://en.wikipedia.org/wiki/ECMAScript_version_history
"Java" is an island in Indonesia associated with coffee beans from the Dutch East Indies that Sun Microsystems named their portable software after.
Coffee production in Indonesia: https://en.wikipedia.org/wiki/Coffee_production_in_Indonesia... :
> Certain estates age a portion of their coffee for up to five years, normally in large burlap sacks, which are regularly aired, dusted, and flipped.
Build your own SQLite, Part 4: reading tables metadata
It's interesting to compare this series to the actual source code of sqlite. For example, sqlite uses a LALR parser generator: https://github.com/sqlite/sqlite/blob/master/src/parse.y#L19...
And queries itself to get the schema: https://github.com/sqlite/sqlite/blob/802b042f6ef89285bc0e72...
Lots of questions, but the main one is whether we have made any progress with these new toolchains and programming languages w/ respect to performance or robustness. And that may be unfair to ask of what is a genuinely useful tutorial.
If you don’t know it already, you’ll probably be interested in limbo: https://github.com/tursodatabase/limbo
It’s much more ambitious/complete than the db presented in the tutorial.
If memory serves me correctly, it uses the same parser generator as SQLite, which may answer some of your questions.
Is translation necessary to port the complete SQLite test suite?
sqlite/sqlite//test: https://github.com/sqlite/sqlite/tree/master/test
tursodatabase/limbo//testing: https://github.com/tursodatabase/limbo/tree/main/testing
ArXiv LaTeX Cleaner: Clean the LaTeX code of your paper to submit to ArXiv
It's really a pity that they do this now. Some of their older papers actually had quite a lot of valuable information, comments, discussions, thoughts, even commented-out sections, figures, and tables in them. It gave a much better view of how the paper was written over time, or how the work progressed over time. Sometimes you also see some alternative titles being discussed, which can be quite funny.
E.g. from https://arxiv.org/abs/1804.09849:
%\title{Sequence-to-Sequence Tricks and Hybrids\\for Improved Neural Machine Translation}
% \title{Mixing and Matching Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Analyzing and Optimizing Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Frankenmodels for Improved Neural Machine Translation}
% \title{Optimized Architectures and Training Strategies\\for Improved Neural Machine Translation}
% \title{Hybrid Vigor: Combining Traits from Different Architectures Improves Neural Machine Translation}
\title{The Best of Both Worlds: \\Combining Recent Advances in Neural Machine Translation\\ ~}
Also a lot of things in the Attention is all you need paper: https://arxiv.org/abs/1706.03762v1
Maybe papers need to be put under version control.
Quantum Bayesian Inference with Renormalization for Gravitational Waves
ScholarlyArticle: "Quantum Bayesian Inference with Renormalization for Gravitational Waves" (2025) https://iopscience.iop.org/article/10.3847/2041-8213/ada6ae
NewsArticle: "Black Holes Speak in Gravitational Waves, Heard Through Quantum Walks" (2025) https://thequantuminsider.com/2025/01/29/black-holes-speak-i... :
> Unlike classical MCMC, which requires a large number of iterative steps to converge on a solution, QBIRD uses a quantum-enhanced Metropolis algorithm that incorporates quantum walks to explore the parameter space more efficiently. Instead of sequentially evaluating probability distributions one step at a time, QBIRD encodes the likelihood landscape into a quantum Hilbert space, allowing it to assess multiple transitions between parameter states simultaneously. This is achieved through a set of quantum registers that track state evolution, transition probabilities, and acceptance criteria using a modified Metropolis-Hastings rule.
> Additionally, QBIRD incorporates renormalization and downsampling, which progressively refine the search space by eliminating less probable regions and concentrating computational resources on the most likely solutions. These techniques enable QBIRD to achieve accuracy comparable to classical MCMC while reducing the number of required samples and computational overhead, making it a more promising approach for gravitational wave parameter estimation as quantum hardware matures.
Parameter estimation algorithms:
"Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171
"Robustly learning Hamiltonian dynamics of a superconducting quantum processor" (2024) https://www.nature.com/articles/s41467-024-52629-3 .. https://news.ycombinator.com/item?id=42086445
Can Large Language Models Emulate Judicial Decision-Making? [Paper]
An actor can emulate the communication style of judicial decision language, sure.
But the cost of a wrong answer (wrongful conviction) exceeds a threshold of ethical use.
> We try prompt engineering techniques to spur the LLM to act more like human judges, but with no success. “Judge AI” is a formalist judge, not a human judge.
From "Asking 60 LLMs a set of 20 questions" https://news.ycombinator.com/item?id=37451642 :
> From https://news.ycombinator.com/item?id=36038440 :
>> Awesome-legal-nlp links to benchmarks like LexGLUE and FairLex but not yet LegalBench; in re: AI alignment and ethics / regional law
>> A "who hath done it" exercise
>> "For each of these things, tell me whether God, Others, or You did it"
AI should never be judge, jury, and executioner.
Homotopy Type Theory
type theory notes: https://news.ycombinator.com/item?id=42440016#42444882
HoTT in Lean 4: https://github.com/forked-from-1kasper/ground_zero
I Wrote a WebAssembly VM in C
This is great! The WebAssembly Core Specification is actually quite readable, although some of the language can be a bit intimidating if you're not used to reading programming language papers.
If anyone is looking for a slightly more accessible way to learn WebAssembly, you might enjoy WebAssembly from the Ground Up: https://wasmgroundup.com
(Disclaimer: I'm one of the authors)
I know one of WebAssembly's biggest features by design is security / "sandbox".
But I've always gotten confused with... it is secure because by default it can't do much.
I don't quite understand how to view WebAssembly. You write in one language, it compiles to another, covering things like basic math (nothing with network or filesystem), and it runs in an interpreter.
I feel like I have a severe gap/misunderstanding. There's been a ton of hype for years, lots of investment... but isn't it the case that anywhere you'd want to add Lua to an app you could add WebAssembly, and vice versa?
WebAssembly can communicate through buffers. WebAssembly can also import foreign functions (Javascript functions in the browser).
You can get output by reading the buffer at the end of execution/when receiving callbacks. So, for instance, you pass a few frames worth of buffers to WASM, WASM renders pixels into the buffers, calls a callback, and the Javascript reads data from the buffer (sending it to a <canvas> or similar).
The benefit of WASM is that it can't be very malicious by itself. It requires the runtime to provide it with exported functions and callbacks to do any file I/O, network I/O, or spawning new tasks. Lua and similar tools can go deep into the runtime they exist in, altering system state and messing with system memory if they want to, while WASM can only interact with the specific API surface you provide it.
That makes WASM less powerful, but more predictable, and in my opinion better for building integrations with as there is no risk of internal APIs being accessed (that you will be blamed for if they break in an update).
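The capability model described above can be illustrated with a pure-Python sketch (not real WASM): the guest only receives a shared buffer plus an explicit table of host-provided imports, so the host controls the entire API surface. The function names here are hypothetical.

```python
# Illustrative sketch (not real WASM) of the capability pattern described above:
# a "guest" receives only a shared buffer and an explicit table of host imports;
# it has no other way to reach the filesystem or the network.

def guest_fill(buffer: bytearray, imports: dict) -> None:
    """Hypothetical guest: writes pixels into the shared buffer, then signals."""
    for i in range(len(buffer)):
        buffer[i] = i % 256          # "render" into linear memory
    imports["on_frame_done"]()       # the only capability the host granted

def run_host():
    frame = bytearray(8)             # shared linear memory
    done = []
    # The host decides the entire API surface the guest can touch:
    guest_fill(frame, {"on_frame_done": lambda: done.append(True)})
    return bytes(frame), done

print(run_host())  # (b'\x00\x01\x02\x03\x04\x05\x06\x07', [True])
```

The point of the sketch: if the host passes an empty import table, the guest can compute over the buffer but cannot observe or affect anything else.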
WASI Preview 1 and WASI Preview 2 can do file and network I/O IIUC.
Re: tty support in container2wasm and fixed 80x25 due to lack of SIGWINCH support in WASI Preview 1: https://github.com/ktock/container2wasm/issues/146
The File System Access API requires granting each app access to each folder.
jupyterlab-filesystem-access only works with Chromium based browsers, because FF doesn't support the File System Access API: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
The File System Access API is useful for opening a local .ipynb and .csv with JupyterLite, which builds CPython for WASM as Pyodide.
There is a "Direct Sockets API in Chrome 131" but not in FF; so WebRTC and WebSocket relaying is unnecessary for WASM apps like WebVM: https://news.ycombinator.com/item?id=42029188
WASI Preview 2: https://github.com/WebAssembly/WASI/blob/main/wasip2/README.... :
> wasi-io, wasi-clocks, wasi-random, wasi-filesystem, wasi-sockets, wasi-cli, wasi-http
US bill proposes jail time for people who download DeepSeek
That would disincentivize this type of research, for example:
"DeepSeek's Hidden Bias: How We Cut It by 76% Without Performance Loss" (2025) https://news.ycombinator.com/item?id=42868271
https://news.ycombinator.com/item?id=42891042
TIL about BBQ: Bias Benchmark for QA
"BBQ: A Hand-Built Bias Benchmark for Question Answering" (2021) https://arxiv.org/abs/2110.08193
Waydroid – Android in a Linux container
Surprised to see this on the frontpage - it's a well known piece of software.
It's unfortunate that there are no Google-vended images (e.g. the generic system image) that run on Waydroid. Typing my password into random ROMs from the internet sketches me out.
I wouldn't say it runs a "random ROM from the internet" - LineageOS is a very well-established project and is fully FOSS (free and open source software) except for firmware necessary for specific devices. It is the natural choice for any project, such as Waydroid, that requires good community support and ongoing availability.
Over a number of years, Google have progressively removed many of the original parts of AOSP (the FOSS foundation upon which Android is based), which means that alternative components have to be developed by projects like LineageOS. In spite of this, I suspect that LineageOS makes fewer modifications to AOSP than most phone vendors do, including Google themselves!
A few things seem consistently missing from these projects: hardware 3D acceleration from the host in a version of OpenGL ES + Vulkan that most phones have natively. Also, many apps have built-in ways of detecting that they're not running on a phone and bail out (looking at cpuinfo and cross-referencing it with the purported device being run).
It also seems that the expected ARM support on device is increasing (along with expected throughput), and that the capability of the x86 host device you need to emulate even a modest modern mobile ARM SoC is getting higher and higher.
Lastly, the android version supported is almost always 3-4 generations behind the current Android. Apps are quick to drop legacy android support or run with fewer features/less optimizations on older versions of the OS. The android base version in this project is from 2020.
Anecdotally, using bluestacks (which indisputably has the most compatible and optimized emulation stack in the entire space) with a 7800X3D / RTX 3090 still runs most games slower than a snapdragon 8 phone from yesteryear running natively.
virtio-gpu rutabaga was recently added to QEMU IIUC mostly by Google for Chromebook Android emulation or Android Studio or both?
virtio-gpu-rutabaga: https://www.qemu.org/docs/master/system/devices/virtio-gpu.h...
Rutabaga Virtual Graphics Interface: https://crosvm.dev/book/appendix/rutabaga_gfx.html
gfxstream: https://android.googlesource.com/platform/hardware/google/gf...
"Gfxstream Merged Into Mesa For Vulkan Virtualization" (2024-09) https://www.phoronix.com/news/Mesa-Gfxstream-Merged
I don't understand why there is not an official x86 container / ROM for Android development? Do CI builds of Android apps not run tests with recent versions of Android? How do CI builds of APKs run GUI tests without an Android container?
There is no official support for x86 in android any more - the Android-x86 project was the last I know that supported/maintained it. Last release was 2022.
For apps that use Vulkan natively, it's easy - but many still use and rely on OpenGL ES. It's a weird scenario where you have apps that are now supporting Vulkan, but they have higher minimum OS requirements as a result... but those versions of Android aren't supported by these type of projects.
37D boundary of quantum correlations with a time-domain optical processor
"Exploring the boundary of quantum correlations with a time-domain optical processor" (2025) https://www.science.org/doi/10.1126/sciadv.abd8080 .. https://arxiv.org/abs/2208.07794v3 :
> Abstract: Contextuality is a hallmark feature of the quantum theory that captures its incompatibility with any noncontextual hidden-variable model. The Greenberger–Horne–Zeilinger (GHZ)-type paradoxes are proofs of contextuality that reveal this incompatibility with deterministic logical arguments. However, the GHZ-type paradox whose events can be included in the fewest contexts and which brings the strongest nonclassicality remains elusive. Here, we derive a GHZ-type paradox with a context-cover number of three and show this number saturates the lower bound posed by quantum theory. We demonstrate the paradox with a time-domain fiber optical platform and recover the quantum prediction in a 37-dimensional setup based on high-speed modulation, convolution, and homodyne detection of time-multiplexed pulsed coherent light. By proposing and studying a strong form of contextuality in high-dimensional Hilbert space, our results pave the way for the exploration of exotic quantum correlations with time-multiplexed optical systems.
New thermogalvanic tech paves way for more efficient fridges
"Solvation entropy engineering of thermogalvanic electrolytes for efficient electrochemical refrigeration" (2025) https://www.cell.com/joule/fulltext/S2542-4351(25)00003-0
Thanks for that, it's great that it's 10x better than the previous best effort. It's notable that it needs to get 20x better again before it starts to have useful applications :-).
Someday maybe!
Quantum thermal diodes: https://news.ycombinator.com/item?id=42537703
From https://news.ycombinator.com/item?id=38861468 :
> Laser cooling: https://en.wikipedia.org/wiki/Laser_cooling
> Cooling with LEDs in reverse: https://issuu.com/designinglighting/docs/dec_2022/s/17923182 :
> "Near-field photonic cooling through control of the chemical potential of photons" (2019) https://www.nature.com/articles/s41586-019-0918-8
High hopes, low expectations :-). That said, 'quantum thermal diode' sounds like something that breaks on your space ship and if you don't fix it you won't be able to get the engines online to outrun the aliens. Even after reading the article I think the unit of measure there should be milli-demons in honor of Maxwell's Demon.
Emergence of a second law of thermodynamics in isolated quantum systems
ScholarlyArticle: "Emergence of a Second Law of Thermodynamics in Isolated Quantum Systems" (2025) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
NewsArticle: "Even Quantum Physics Obeys the Law of Entropy" https://www.tuwien.at/en/tu-wien/news/news-articles/news/auc...
NewsArticle: "Sacred laws of entropy also work in the quantum world, suggests study" ... "90-year-old assumption about quantum entropy challenged in new study" https://interestingengineering.com/science/entropy-also-work...
Polarization-dependent photoluminescence of Ce-implanted MgO and MgAl2O4
NewsArticle: "Scientists explore how to make quantum bits with spinel gemstones" (2025) https://news.uchicago.edu/story/scientists-explore-how-make-... :
> A type of gemstone called spinel can be used to store quantum information, according to new research from a collaboration involving University of Chicago, Tohoku University, and Argonne National Laboratory.
3D scene reconstruction in adverse weather conditions via Gaussian splatting
Large Language Models for Mathematicians (2023)
It makes sense for LLMs to work with testable code for symbolic mathematics; CAS (Computer Algebra System) code instead of LaTeX, which only roughly corresponds.
Are LLMs training on the AST parses of the symbolic expressions, or on token co-occurrence? What about training on the relations between code and tests?
Benchmarks for math and physics LLMs: FrontierMath, TheoremQA, Multi SWE-bench: https://news.ycombinator.com/item?id=42097683
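One concrete reading of "training on AST parses" rather than raw tokens, sketched with the stdlib `ast` module: two differently spaced spellings of the same symbolic expression produce identical ASTs, so surface token differences disappear. (This is just an assumption about what AST-level normalization could buy; it says nothing about what any particular LLM actually trains on.)

```python
import ast

# Sketch: the same symbolic expression written two ways yields one AST,
# whereas the raw token sequences differ. AST-level training data would
# normalize away such surface variation.
a = ast.dump(ast.parse("x**2 + 1", mode="eval"))
b = ast.dump(ast.parse("x ** 2+1", mode="eval"))
print(a == b)  # True: whitespace/token differences vanish at the AST level
```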
Large language models think too fast to explore effectively
Maps well to Kahneman's "Thinking Fast and Slow" framework
system 1 thinking for early layer processing of uncertainty in LLMs. quick, intuitive decisions, focuses on uncertainty, happens in early transformer layers.
system 2 thinking for later layer processing of empowerment (selecting elements that maximize future possibilities). strategic, deliberate evaluation, considering long-term possibilities, happens in later layers.
system 1 = 4o/llama 3.1
system 1 + system 2 = o1/r1 reasoning models
empowerment calculation seems possibly oversimplified - assumes a static value for elements over a dynamic context-dependent empowerment
interesting that higher temperatures improved performance slightly for system 1 models although they still made decisions before empowerment information could influence them
edit: removed the word "novel". The paper shows early-layer processing of uncertainty vs later-layer processing of empowerment.
Stanovich proposes a three tier model http://keithstanovich.com/Site/Research_on_Reasoning_files/S...
Modeling an analog system like human cognition into any number of discrete tiers is inherently kind of arbitrary and unscientific. I doubt you could ever prove experimentally that all human thinking works through exactly two or three or ten or whatever number of tiers. But it's at least possible that a three-tier model facilitates building AI software which is "good enough" for many practical use cases.
Funny you should say that because an American guy did that a hundred years ago and nailed it.
He divided reasoning into the two categories corollarial and theorematic.
Charles Sanders Peirce > Pragmatism > Theory of inquiry > Scientific Method: https://en.wikipedia.org/wiki/Charles_Sanders_Peirce#Scienti...
Peirce’s Deductive Logic: https://plato.stanford.edu/entries/peirce-logic/
Scientific Method > 2. Historical Review: Aristotle to Mill https://plato.stanford.edu/entries/scientific-method/#HisRev...
Scientific Method: https://en.wikipedia.org/wiki/Scientific_method
Reproducibility: https://en.wikipedia.org/wiki/Reproducibility
Replication crisis: https://en.wikipedia.org/wiki/Replication_crisis
TFS > Replication crisis https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
"Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35879007 :
Lateralization of brain function: https://en.wikipedia.org/wiki/Lateralization_of_brain_functi...
Ask HN: Percent of employees that benefit financially from equity offers?
Title says it all. Equity offers are a very common thing in tech. I don't personally know anyone who has made money from equity offers, though nearly all my colleagues have received them at some point.
Does anyone have real data on how many employees actually see financial upside from equity grants? Are there studies or even anecdotal numbers on how common it is for non-executives/non-founders to walk away with any money? Specifically talking about privately held US startups.
From https://news.ycombinator.com/item?id=29141796 :
> There are a number of options/equity calculators:
> https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits", "~20% of companies will make you money")
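The tldroptions figures quoted above translate into a quick expected-value calculation. The payoff multiples on a hypothetical $100k paper grant below are placeholders for illustration, not data from any study:

```python
# Expected value of an equity grant using the rough outcome frequencies quoted
# above (~65% no exit, ~15% low exit, ~20% money-making). The payoff multiples
# on a $100k paper grant are hypothetical placeholders, not data.
outcomes = [
    (0.65, 0.0),   # no exit: grant worth nothing
    (0.15, 0.2),   # low exit: pennies on the dollar
    (0.20, 1.5),   # good exit: some upside
]
grant_paper_value = 100_000
ev = sum(p * multiple * grant_paper_value for p, multiple in outcomes)
print(f"${ev:,.0f}")
```

Under these made-up multiples the expected value is about a third of the paper value, which is why the headline grant number alone tells you little.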
Ultra-fast picosecond real-time observation of optical quantum entanglement
"Real-time observation of picosecond-timescale optical quantum entanglement towards ultrafast quantum information processing" (2025) https://www.nature.com/articles/s41566-024-01589-7 .. https://arxiv.org/abs/2403.07357v1 (2024) :
> Abstract: Entanglement is a fundamental resource for various optical quantum information processing (QIP) applications. To achieve high-speed QIP systems, entanglement should be encoded in short wavepackets. Here we report the real-time observation of ultrafast optical Einstein–Podolsky–Rosen correlation at a picosecond timescale in a continuous-wave system. Optical phase-sensitive amplification using a 6-THz-bandwidth waveguide-based optical parametric amplifier enhances the effective efficiency of 70-GHz-bandwidth homodyne detectors, mainly used in 5G telecommunication, enabling its use in real-time quantum state measurement. Although power measurement using frequency scanning, such as an optical spectrum analyser, is not performed in real time, our observation is demonstrated through the real-time amplitude measurement and can be directly used in QIP applications. The observed Einstein–Podolsky–Rosen states show quantum correlation of 4.5 dB below the shot-noise level encoded in wavepackets with 40 ps period, equivalent to 25 GHz repetition—10³ times faster than previous entanglement observation in continuous-wave systems. The quantum correlation of 4.5 dB is already sufficient for several QIP applications, and our system can be readily extended to large-scale entanglement. Moreover, our scheme has high compatibility with optical communication technology such as wavelength-division multiplexing, and femtosecond-timescale observation is also feasible. Our demonstration is a paradigm shift in accelerating accessible quantum correlation—the foundational resource of all quantum applications—from the nanosecond to picosecond timescales, enabling ultrafast optical QIP.
Recipe Database with Semantic Search on Digital Ocean's Smallest VM
Datasette-lite supports SQLite in WASM in a browser.
DuckDB WASM would also solve for a recipe database without an application server, for example in order to reduce annual hosting costs of a side project.
Is the scraped data CC-BY-SA licensed? Attribution would be good.
/? datasette vector search
FAISS
WASM vector database: https://www.google.com/search?q=WASM+vector+database
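A minimal stdlib sketch of the idea behind semantic search over recipes: bag-of-words vectors and cosine similarity stand in for real embeddings plus FAISS or a WASM vector database as linked above. All the recipe data is made up for illustration.

```python
import math
from collections import Counter

# Minimal sketch of vector search over recipes: bag-of-words vectors and
# cosine similarity stand in for real embeddings + FAISS / a WASM vector DB.
def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

recipes = {
    "tomato basil pasta": "pasta tomato basil garlic olive oil",
    "chicken curry": "chicken curry coconut milk rice",
    "caprese salad": "tomato mozzarella basil olive oil",
}
query = vectorize("fresh tomato and basil")
best = max(recipes, key=lambda name: cosine(query, vectorize(recipes[name])))
print(best)  # caprese salad: highest term overlap relative to its length
```

Swapping `vectorize` for a real embedding model is what turns this from keyword overlap into semantic search; the ranking machinery stays the same.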
Moiré-driven topological electronic crystals in twisted graphene
ScholarlyArticle: "Moiré-driven topological electronic crystals in twisted graphene" (2025) https://www.nature.com/articles/s41586-024-08239-6
NewsArticle: "Anomalous Hall crystal made from twisted graphene" (2025) https://physicsworld.com/a/anomalous-hall-crystal-made-from-...
Adding concurrent read/write to DuckDB with Arrow Flight
Just sanity checking here - with flight write streams to duckdb, I'm guessing there is no notion of transactional boundary here, so if we want data consistency during reads, that's another level of manual app responsibilities? And atomicity is there, but at the single record batch or row group level?
Ex: if we have a streaming financial ledger as 2 tables, that is 2 writes, and a reader might see an inconsistent state of only 1 write
Ex: streaming ledger as one table, and the credit+debit split into 2 distanced rowgroups, same inconsistency?
Ex: in both cases, we might have the server stream back an ack of what was written, so we could at least get a guarantee of which timestamps are fully written for future reads, and queries can manually limit to known-complete intervals
We are looking at adding streaming writes to GFQL, an open source columnar (arrow-native) CPU/GPU graph query language, where this is the same scenario: appends mean updating both the nodes table and the edges table
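The third pattern above (server acks what was written; readers limit themselves to known-complete intervals) can be sketched in a few lines. The class and method names are hypothetical, not a DuckDB or Arrow Flight API:

```python
# Sketch of the watermark pattern from the examples above: the writer acks each
# fully committed timestamp, and readers only query up to the last acked one.
# All names are hypothetical, not a DuckDB/Arrow Flight API.
class Ledger:
    def __init__(self):
        self.rows = []            # (timestamp, table, amount)
        self.complete_ts = -1     # watermark: last fully written timestamp

    def write_txn(self, ts, credit, debit):
        # Both halves of the double entry land before the watermark advances.
        self.rows.append((ts, "credits", credit))
        self.rows.append((ts, "debits", debit))
        self.complete_ts = ts     # ack: ts is now known-complete

    def consistent_read(self):
        # Readers never see a half-written transaction.
        return [r for r in self.rows if r[0] <= self.complete_ts]

led = Ledger()
led.write_txn(1, 100, -100)
led.rows.append((2, "credits", 50))   # simulate a torn write: no ack yet
print(led.consistent_read())          # only the ts=1 rows are visible
```

This only gives read-your-acks consistency at the app level; it does nothing about concurrent writers, which is where real transactional boundaries would be needed.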
Yes, reading this post (working around a database's concurrency control) made me raise an eyebrow. If you are ok with inconsistent data then that's fine. Or if you handle consistency at a higher level that's fine too. But if either of these are the case why would you be going through DuckDB? You could write out Parquet files directly?
cosmos/iavl is a Merkleized AVL tree.
https://github.com/cosmos/iavl :
> Merkleized IAVL+ Tree implementation in Go
> The purpose of this data structure is to provide persistent storage for key-value pairs (say to store account balances) such that a deterministic merkle root hash can be computed. The tree is balanced using a variant of the AVL algorithm so all operations are O(log(n)).
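The deterministic root hash the iavl README describes can be sketched with `hashlib`: hash sorted key-value leaves, then pair hashes upward to a single root, so any mutation changes the root. This omits the AVL+ balancing that makes the real tree O(log n) to update; it only shows the Merkle part.

```python
import hashlib

# Minimal sketch of a deterministic Merkle root over key-value pairs, in the
# spirit of the iavl README above (leaves hashed, then paired up to one root).
# The AVL+ balancing of the real tree is omitted.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(kv: dict) -> str:
    level = [h(f"{k}={v}".encode()) for k, v in sorted(kv.items())]
    if not level:
        return h(b"").hex()
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

balances = {"alice": 10, "bob": 5}
root1 = merkle_root(balances)
root2 = merkle_root({**balances, "bob": 6})  # one balance changed
print(root1 != root2)  # True: any mutation changes the root hash
```

Two replicas with the same root hash hold the same balances, which is the property that lets a deterministic Merkle root serve as a cheap consistency check.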
Integer Vector clock or Merkle hashes?
Why shouldn't you store account balances in git, for example?
Or, why shouldn't you append to Parquet or Feather and LZ4 for strongly consistent transactional data?
Centralized databases can have Merkle hashes, too;
"How Postgres stores data on disk" https://news.ycombinator.com/item?id=41163785 :
> Those systems index Parquet. Can they also index Feather IPC, which an application might already have to journal and/or log, and checkpoint?
DLT applications for strong transactional consistency sign and synchronize block messages and transaction messages.
Public blockchains have average transaction times and costs.
Private blockchains also have TPS Transactions Per Second metrics, and unknown degrees of off-site redundancy for consistent storage with or without indexes.
Blockchain#Openness: https://en.wikipedia.org/wiki/Blockchain#Openness :
> An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. [46][47][48][49][50] Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. [51] Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. [52]
> Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. [46][48] Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion." [10][53]
Merkle Town: https://news.ycombinator.com/item?id=38829274 :
> How CT works > "How CT fits into the wider Web PKI ecosystem": https://certificate.transparency.dev/howctworks/
From "PostgreSQL Support for Certificate Transparency Logs Now Available" https://news.ycombinator.com/item?id=42628223 :
> Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
> Sigstore Rekor also has centralized Merkle hashes.
I think you replied in the wrong post.
No, I just explained how the world does strongly consistent distributed databases for transactional data, which is the exact question here.
DuckDB does not yet handle strong consistency. Blockchains and SQL databases do.
Blockchains are a fantastic way to run things slowly ;-) More seriously: Making crypto fast does sound like a fun technical challenge, but well beyond what our finance/gov/cyber/ai etc customers want us to do.
For reference, our goal here is to run around 1 TB/s per server, and many times more on a beefier server. The same tech just landed at spot #3 on the Graph 500 on its first try.
To go even bigger & faster, we are looking for ~phd intern fellows to run on more than one server, if that's your thing: OSS GPU AI fellowship @ https://www.graphistry.com/careers
The flight perspective aligns with what we're doing. We skip the duckdb CPU indirections (why drink through a long twirly straw?) and go straight to arrow on GPU RAM. For our other work, if duckdb does give reasonable transactional guarantees here, that's interesting... hence my (in earnest) original question. AFAICT, the answers are resting on operational answers & docs that don't connect to how we normally talk about databases giving you answers on consistent vs inconsistent views of data.
Do you think that blockchain engineers are incapable of developing high-throughput distributed systems due to engineering incapacity, or due to real limits on how fast a strongly consistent, sufficiently secured cryptographic distributed system can be? Are blockchain devs all just idiots, or have they dumbly prioritized data integrity, which doesn't matter because it's all about big data these days and nobody needs CAP?
From "Rediscovering Transaction Processing from History and First Principles" https://news.ycombinator.com/item?id=41064634 :
> metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)
> Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr
TB/s in query processing of data already in RAM?
/? TB/s "hnlog"
- https://news.ycombinator.com/item?id=40423020 , [...] :
> The HBM3E Wikipedia article says 1.2TB/s.
> Latest PCIe 7 x16 says 512 GB/s:
fiber optics: 301 TB/s (2024-05)
Cerebras: https://en.wikipedia.org/wiki/Cerebras :
WSE-2 on-chip SRAM memory bandwidth: 20 PB/s / 220 PB/s
WSE-3: 21 PB/s
HBM > Technology: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Technolo... :
HBM3E: 9.8 Gbit/s , 1229 Gbyte/s (2023)
HBM4: 6.4 Gbit/s , 1638 Gbyte/s (2026)
LPDDR SDRAM > Generations: https://en.wikipedia.org/wiki/LPDDR#Generations :
LPDDR5X: 1,066.63 MB/s (2021)
GDDR7: https://en.m.wikipedia.org/wiki/GDDR7_SDRAM
GDDR7: 32 Gbps/pin - 48 Gbps/pin,[11] and chip capacities up to 64 Gbit, 192 GB/s
List of interface bit rates: https://en.wikipedia.org/wiki/List_of_interface_bit_rates :
PCIe7 x16: 1.936 Tbit/s 242 GB/s (2025)
800GBASE-X: 800 Gbps (2024)
DDR5-8800: 70.4 GB/s
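Since the list above mixes Tbit/s, Gbit/s, and GB/s, a quick decimal-unit converter helps sanity-check entries, e.g. PCIe 7 x16's 1.936 Tbit/s corresponds to 242 GB/s:

```python
# The figures above mix bits/s and bytes/s; a quick converter (decimal units).
def tbit_to_gbyte(tbit_per_s: float) -> float:
    # Tbit/s -> Gbit/s (x1000) -> GB/s (/8), rounded to avoid float noise
    return round(tbit_per_s * 1000 / 8, 3)

print(tbit_to_gbyte(1.936))  # PCIe 7 x16: 1.936 Tbit/s -> 242.0 GB/s
```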
Bit rate > In data communications: https://en.wikipedia.org/wiki/Bit_rate#In_data_communications ; Gross and Net bit rate, Information rate, Network throughput, Goodput
Re: TPUs, NPUs, TOPS: https://news.ycombinator.com/item?id=42318274 :
> How many TOPS/W and TFLOPS/W? (T [Float] Operations Per Second per Watt (hour ?))*
Top 500 > Green 500: https://www.top500.org/lists/green500/2024/11/ :
PFlop/s (Rmax)
Power (kW)
GFlops/watts (Energy Efficiency)
Performance per watt > FLOPS/watts: https://en.wikipedia.org/wiki/Performance_per_watt#FLOPS_per...
Electrons: 50%–99% of c, the speed of light (Speed of electricity: https://en.wikipedia.org/wiki/Speed_of_electricity , Velocity factor of a CAT-7 cable: https://en.wikipedia.org/wiki/Velocity_factor#Typical_veloci... )
Photons: c (*)
Gravitational Waves: Even though both light and gravitational waves were generated by this event, and they both travel at the same speed, the gravitational waves stopped arriving 1.7 seconds before the first light was seen ( https://bigthink.com/starts-with-a-bang/light-gravitational-... )
But people don't do computation with gravitational waves.
To a reasonable rounding error.. yes
How would you recommend that appends to Parquet files be distributedly synchronized with zero trust?
Raft, Paxos, BFT, ... /? hnlog paxos ... this is about "50 years later, is two-phase locking the best we can do?" https://news.ycombinator.com/item?id=37712506
To have consensus about protocol revisions; To have data integrity and consensus about the merged sequence of data in database {rows, documents, named graphs, records,}.
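One building block behind those protocols can be sketched with the stdlib: a hash-chained append log whose entries each commit to the previous entry's hash and carry an authentication tag, so replicas can verify they hold the same merged sequence. HMAC with a shared key stands in for real public-key signatures, and consensus on ordering (Raft/Paxos/BFT) is not shown.

```python
import hashlib
import hmac

# Sketch of one ingredient of zero-trust appends: each record commits to the
# previous record's hash and carries an HMAC tag (a stand-in for real
# public-key signatures). Consensus on ordering (Raft/Paxos/BFT) is not shown.
KEY = b"shared-secret-for-demo-only"

def append(log: list, payload: bytes) -> None:
    prev = log[-1][0] if log else b"\x00" * 32
    entry_hash = hashlib.sha256(prev + payload).digest()
    tag = hmac.new(KEY, entry_hash, hashlib.sha256).digest()
    log.append((entry_hash, payload, tag))

def verify(log: list) -> bool:
    prev = b"\x00" * 32
    for entry_hash, payload, tag in log:
        if hashlib.sha256(prev + payload).digest() != entry_hash:
            return False
        expected = hmac.new(KEY, entry_hash, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        prev = entry_hash
    return True

log = []
append(log, b"row1")
append(log, b"row2")
print(verify(log))        # True
log[0] = (log[0][0], b"tampered", log[0][2])
print(verify(log))        # False: the chain detects the rewrite
```

Rewriting any entry breaks every later link in the chain, which is the integrity property; agreement on *which* chain is canonical is what the consensus protocols above provide.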
"We're building a new static type checker for Python"
Could it please also do runtime type checking?
(PyContracts and iContract do runtime type checking, but it's not very performant.)
That MyPy isn't usable at runtime causes lots of re-work.
Have you tried beartype? It's worked well for me and has the least overhead of any other runtime type checker.
I think TypeGuard (https://github.com/agronholm/typeguard) also does runtime type checking. I use beartype BTW.
pycontracts: https://github.com/AlexandruBurlacu/pycontracts
icontract: https://github.com/Parquery/icontract
The DbC Design-by-Contract patterns supported by icontract probably have code quality returns beyond saving work.
Safety critical coding guidelines specify that there must be runtime type and value checks at the top of every function.
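Beartype and typeguard do this properly; a minimal stdlib sketch of the idea (a check at the top of every call, per the guideline above) looks like the following. It handles only plain classes, not generics like `list[int]`:

```python
import functools
import inspect
from typing import get_type_hints

# Minimal stdlib sketch of runtime type checking at the top of every call, per
# the safety-critical guideline above. Handles only plain classes, not generics
# like list[int]; beartype/typeguard cover those properly and much faster.
def runtime_checked(func):
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name}={value!r} is not {expected.__name__}")
        return func(*args, **kwargs)
    return wrapper

@runtime_checked
def scale(x: int, factor: float) -> float:
    return x * factor

print(scale(3, 2.0))   # 6.0
try:
    scale("3", 2.0)
except TypeError as e:
    print(e)           # x='3' is not int
```

The per-call `sig.bind` and `isinstance` work is also a decent illustration of why naive runtime checking is slow, and why beartype goes to such lengths to generate cheap per-function checkers instead.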
Gravitational Communication: Fundamentals, State-of-the-Art and Future Vision
"Communicating with Gravitational Waves" https://www.universetoday.com/170685/communicating-with-grav... :
> What’s promising about gravitational wave communication (GWC) is that it could overcome these challenges. GWC is robust in extreme environments and loses minimal energy over extremely long distances. It also overcomes problems that plague electromagnetic communication (EMC), like diffusion, distortion, and reflection. There’s also the intriguing possibility of harnessing naturally created GWs, which means reducing the energy needed to create them.
Like backscatter with gravitational waves?
Re Gravitational Wave detectors: https://news.ycombinator.com/item?id=41632710 :
>> Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform/or and other methods?
ScholarlyArticle: "Gravitational Communication: Fundamentals, State-of-the-Art and Future Vision" (2024) https://arxiv.org/abs/2501.03251
Tapping into the natural aromatic potential of microbial lignin valorization
Transparent wood composite > Delignification process: https://en.wikipedia.org/wiki/Transparent_wood_composite#Del... :
> The production of transparent wood from the delignification process vary study by study. However, the basics behind it are as follows: a wood sample is drenched in heated (80 °C–100 °C) solutions containing sodium chloride, sodium hypochlorite, or sodium hydroxide/sulfite for about 3–12 hours followed by immersion in boiling hydrogen peroxide.[15] Then, the lignin is separated from the cellulose and hemicellulose structure, turning the wood white and allowing the resin penetration to start. Finally, the sample is immersed in a matching resin, usually PMMA, under high temperatures (85 °C) and a vacuum for 12 hours.[15] This process fills the space previously occupied by the lignin and the open wood cellular structure resulting in the final transparent wood composite.
How could transparent wood production methods be sustainably scaled?
Is the lignin extracted in transparent wood production usable for valorization?
"Lignin valorization: Status, challenges and opportunities" (2022) https://www.sciencedirect.com/science/article/abs/pii/S09608... :
> Most of the research on lignin valorization has been done on lignin from pulp and paper industries (Bruijnincx et al., 2015, Reshmy et al., 2022) The advantage of using lignin from those facilities is that the resource is already centralized and the transportation costs to further process are significantly less
Freezing CPU intensive background tabs in Chrome
How could browsers let app developers know that their app requires excessive resources?
From https://news.ycombinator.com/item?id=40861851 :
>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU,
X/Freedesktop.org Encounters New Cloud Crisis: Needs New Infrastructure
> Equinix had been sponsoring three AMD EPYC 7402P servers and another three dual Intel Xeon Silver 4214 servers for running the FreeDesktop.org GitLab cluster. Plus for GitLab runners there are three AMD EPYC 7502P servers and two Ampere Altra 80-core servers.
> The FreeDesktop.org GitLab burns through around 50TB of bandwidth per month.
Is there an estimate of the monthly and annual costs?
List of display servers: https://en.wikipedia.org/wiki/List_of_display_servers
Who's profiting from X/Freedesktop.org?
(Screenkey, xrandr, and xgamma don't work on Wayland)
Sommelier is Wayland based.
XQuartz hasn't had a release since 2023 and doesn't yet support HiDPI (so X apps on Mac are line-doubled). Shouldn't they be merging fixes?
DOT rips up US fuel efficiency regulations [pdf]
What is the AQI where they live?
And a pipeline through my f heart.
From https://en.wikipedia.org/wiki/United_States_offshore_drillin... :
> In 2018, a new federal initiative to expand offshore drilling suddenly excluded Florida, but although this would be favored by Floridians, concerns remained about the basis for that apparently arbitrary exception being merely politically motivated and tentative. No scientific, military, or economic basis for the decision was given, provoking continuing public concern in Florida.[11]
Why not Florida?
> In 2023, President Biden signed a Memorandum of March 13, 2023 prohibiting oil and gas leasing in certain arctic areas of the Outer Continental Shelf (Withdrawal of Certain Areas off the United States Arctic Coast of the Outer Continental Shelf from Oil or Gas Leasing). However, in January 2025
From https://coast.noaa.gov/states/fast-facts/economics-and-demog... :
> Coastal counties of the U.S. are home to 129 million people, or almost 40 percent of the nation's total population
Are we going to protect other states from this, too?
Servers and data centers could use 30% less energy with a simple Linux update
"Kernel vs. User-Level Networking: Don't Throw Out the Stack with the Interrupts" (2022) https://dl.acm.org/doi/abs/10.1145/3626780 :
> Abstract: This paper reviews the performance characteristics of network stack processing for communication-heavy server applications. Recent literature often describes kernel-bypass and user-level networking as a silver bullet to attain substantial performance improvements, but without providing a comprehensive understanding of how exactly these improvements come about. We identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead. While IRQs and their handling have a substantial impact on the effectiveness of the processor pipeline and thereby the overall processing efficiency, their overhead is difficult to measure directly when serving demanding workloads. This paper presents an indirect methodology to assess IRQ overhead by constructing preliminary approaches to reduce the impact of IRQs. While these approaches are not suitable for general deployment, their corresponding performance observations indirectly confirm the conjecture. Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput without compromising tail latency. In case of server applications, such as web servers or Memcached, the resulting performance is comparable to using kernel-bypass and user-level networking when using stacks with similar functionality and flexibility.
Concept cells help your brain abstract information and build memories
skos:Concept RDFS Class: https://www.w3.org/TR/skos-reference/#concepts
schema:Thing: https://schema.org/Thing
atomspace:ConceptNode: https://wiki.opencog.org/w/Atom_types .. https://github.com/opencog/atomspace#examples-documentation-...
SKOS Simple Knowledge Organization System > Concepts, ConceptScheme: https://en.wikipedia.org/wiki/Simple_Knowledge_Organization_...
But temporal instability observed in repeat functional imaging studies indicates that functional localization is not constant: the regions of the brain that activate for a given cue vary over time.
From https://news.ycombinator.com/item?id=42091934 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Future work should characterize drift across brain regions, cell types, and learning.
The important part about the statements in the drift paper are the qualifiers:
> Cells whose activity was previously correlated with environmental and behavioral variables are most frequently no longer active in response to the same variables weeks later. At the same time, a mostly new pool of neurons develops activity patterns correlated with these variables.
“Most frequently” and “mostly new” -- this means that some neurons still fire across the weeks-long periods for the same activities, leaving plenty of potential space for concept cells.
This doesn’t necessarily mean concept cells exist, but it does allow for the possibility of their existence.
I also didn’t check which regions of the brain were evaluated in each concept, as it is likely they have some different characteristics at the neuron level.
So there's more stability in the electrovolt wave function of the brain than in the cellular activation pathways?
I am not sure what an electrovolt is, or why this question follows from what I said? All I said was all neurons don’t seem to switch their activation stimuli, only “many.”
Show HN: Design/build of some parametric speaker cabinets with OpenSCAD
> As the design was fully parametric, I could change a single variable to move to a floorstanding design.
How far are we in 2025 from defining a Differentiable design, setting a target (e.g. maximally flat frequency response for a certain speaker placement in a certain room) and solving this automatically?
A basic speaker can't do a lot to improve "room acoustics" [1], particularly below the Schroeder frequency, where room modes greatly affect the bass response. From my experience, it's the bass that needs fixing in-room, because even a "perfect speaker" will get boomy/muddy at some frequencies (i.e. reflections overlapping constructively), and thin/null at others (i.e. reflections overlapping destructively). And if you aren't going to worry about "fixing the room", then there are already companies/products like the Kali Audio IN-5 speakers [2] that have squeezed some good performance and tech (e.g. active DSP with a coaxial driver) into an affordable package.
There are ways to improve the bass situation, and one is by implementing "directional bass", aka a cardioid array. This could be achieved with multiple individual speakers (as an add-on/upgrade to an existing system), or could be integrated into one "speaker". Examples of the latter have been around for a few years -- see the Dutch & Dutch 8c or Kii Three. They are relatively expensive (which you could consider an early-adopter tax), but affordable competition is starting to mature with speakers such as the Mesanovic CDM65 [3].
There is another way to improve things too, and that is via "Active Room Treatment" [4], as Dirac calls it. Basically it uses excess capability in various speakers to "clean up" the audio of other speakers in the system by outputting "cancellation waves" (to cancel the problems). The results appear amazing, but they are taking their sweet time getting it released onto affordable equipment.
There's also "spatial audio" like Dolby Atmos that should/could work around room problems in a similar way to Dirac ART. So good speakers (like those that already exist) + ART + Atmos + AI-"upscaled" 2-channel source music could be the final frontier? But that's just for "mechanical" sound reproduction. Maybe in the future I can just transmit the song straight into my brain, bypassing my ears and the need for speakers entirely?!
[1] https://en.wikipedia.org/wiki/Room_acoustics [2] https://www.kaliaudio.com/independence [3] https://www.audiosciencereview.com/forum/index.php?threads/m... [4] https://www.dirac.com/live/dirac-live-active-room-treatment/
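For reference, the Schroeder frequency mentioned above can be estimated with the standard approximation f_S ≈ 2000·√(RT60/V); the room numbers below are assumed for illustration:

```python
import math

def schroeder_frequency(rt60_s: float, volume_m3: float) -> float:
    """Schroeder frequency in Hz: below it, discrete room modes dominate
    the bass response; above it, reflections blend statistically."""
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

# A typical domestic listening room (assumed): RT60 = 0.5 s, V = 60 m^3
print(round(schroeder_frequency(0.5, 60.0)))  # ~183 Hz
```

So for an ordinary room, everything below roughly 150-250 Hz is in the mode-dominated region where placement, cardioid arrays, or DSP correction matter most.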
If you want to DIY something similar, check out the LXmin and other Linkwitz designs
In this video, they explain that the bandpass port that lets air out (typically at the back of the speaker) is essential for acoustic transmission, including bass; and they made an anechoic chamber.
"World's Second Best Speakers!" https://youtube.com/watch?v=EEh01PX-q9I&
TechIngredients also has a video about attaching $6 bass kickers to XPS foam to make flat panel speakers about as good as expensive bookshelf speakers.
"World’s Best Speakers!" https://youtube.com/watch?v=CKIye4RZ-5k&
Tool touted as 'first AI software engineer' is bad at its job, testers claim
Just blogspam rehash of https://www.answer.ai/posts/2025-01-08-devin.html#what-is-de...
[Multi-] SWE-bench Leaderboards:
SWE-bench: https://www.swebench.com/ :
> SWE-bench is a dataset that tests systems' ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
Multi-SWE-bench: A Multi-Lingual and Multi-Modal GitHub Issue Resolving Benchmark: https://multi-swe-bench.github.io/
Show HN: Stratoshark, a sibling application to Wireshark
Hi all, I'm excited to announce Stratoshark, a sibling application to Wireshark that lets you capture and analyze process activity (system calls) and log messages in the same way that Wireshark lets you capture and analyze network packets. If you would like to try it out you can download installers for Windows and macOS and source code for all platforms at https://stratoshark.org.
AMA: I'm the goofball whose name is at the top of the "About" box in both applications, and I'll be happy to answer any questions you might have.
Re: custom fields in pcap traces and retis https://github.com/retis-org/retis
MIT Unveils New Robot Insect, Paving the Way Toward Rise of Robotic Pollinators
Is there robo-beekeeping with or without humanoid robots?
/? Robo beekeeping: https://www.google.com/search?q=robo+beekeeping
Beekeeping: https://en.wikipedia.org/wiki/Beekeeping
FWIU bees like clover (and dandelions are their spring food source), which we typically kill with broadleaf herbicide for lawncare.
From https://news.ycombinator.com/item?id=38158625 :
> Is it possible to create a lawn weed killer (a broadleaf herbicide) that doesn't kill white dutch clover; because bees eat clover (and dandelions) and bees are essential?"
> [ Dandelion rubber is a sustainable alternative to microplastic tires ]
Pesticide toxicity to bees: https://en.wikipedia.org/wiki/Pesticide_toxicity_to_bees :
> Pesticides, especially neonicotinoids, have been investigated in relation to risks for bees such as Colony Collapse Disorder. A 2018 review by the European Food Safety Authority (EFSA) concluded that most uses of neonicotinoid pesticides such as clothianidin represent a risk to wild bees and honeybees. [5][6] Neonicotinoids have been banned for all outdoor use in the entire European Union since 2018, but has a conditional approval in the U.S. and other parts of the world, where it is widely used. [7][8]
TIL dish soap kills wasps, yellow jackets, hornets nearly on contact.
From https://savethebee.org/garden-weeds-bees-love/ :
> Many are beneficial, like dandelions, milkweed, clover, goldenrod and nettle, for bees and other pollinators.
Show HN: Pytest-evals – Simple LLM apps evaluation using pytest
The pytest-evals README mentions that it's built on pytest-harvest, which works with pytest-xdist and pytest-asyncio.
pytest-harvest: https://smarie.github.io/python-pytest-harvest/ :
> Store data created during your pytest tests execution, and retrieve it at the end of the session, e.g. for applicative benchmarking purposes
Yeah, pytest-harvest is a pretty cool plugin.
Originally I had a (very large and unfriendly) conftest file, but it was quite challenging to collaborate on with other team members and was quite repetitive. So I wrapped it as a plugin, added some more functionality, and that's it.
This plugin wraps some boilerplate code in a way that is easy to use, especially for the eval use-case. It's minimalistic by design. Nothing big or fancy.
Laser technique measures distances with nanometre precision
> optical frequency comb
"113 km absolute ranging with nanometer precision" (2024) https://arxiv.org/abs/2412.05542 :
> two-way dual-comb ranging (TWDCR) approach
> The advanced long-distance ranging technology is expected to have immediate implications for space research initiatives, such as the space telescope array and the satellite gravimetry
Note that the precision is very good, however the accuracy is nowhere near as close (fractions of a metre) in the atmosphere, due to the variable refractive index of air. Long term averages can help, of course.
They talk about this technique for ranging between satellites, which wouldn't have to deal with atmospheric conditions.
For ranging in an atmosphere they suggest dual-comb spectroscopy or two-color methods to account for the atmospheric changes.
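The fractions-of-a-metre figure follows from simple arithmetic; the refractive-index uncertainty below is an assumed, plausible value, not a number from the paper:

```python
L = 113e3             # one-way path length from the paper's title, metres
dn = 1e-6             # assumed uncertainty in air's refractive index
range_error = L * dn  # optical path error from the unknown refractivity
print(range_error)    # ~0.11 m: nanometre precision, but only
                      # fraction-of-a-metre absolute accuracy in air
```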
From https://news.ycombinator.com/item?id=42330179 .. https://www.notebookcheck.net/University-of-Tokyo-researcher... :
> multispectral [UV + IR] camera ... optical, non-contact method to detect [blood pressure, hypertension, blood glucose levels, and diabetes]." [ for "Non-Contact Biometric System for Early Detection of Hypertension and Diabetes" https://www.ahajournals.org/doi/10.1161/circ.150.suppl_1.413... ]
>> "Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
>>> new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). [ and XGboost ]
Show HN: Terraform Provider for Inexpensive Switches
Hi HN,
I’ve been building this provider for (web-managed) network switches manufactured by HRUI. These switches are often used in SMBs, home labs, and by budget-conscious enthusiasts. Many HRUI switches are also rebranded and sold under various OEM/ODM names (e.g. Horaco, XikeStor, keepLiNK, Sodola, etc.), making them accessible/popular but often overlooked in the world of infrastructure automation.
The provider is in pre-release, and I’m looking for owners of these switches to test it and share feedback. My goal is to make it easier to automate its config using Terraform/OpenTofu :)
You can use this provider to configure VLANs, port settings, trunk/link aggregation etc.
I built this provider to address the lack of automation tools for budget-friendly hardware. It leverages goquery and has an internal SDK sitting between the Terraform resources and the switch Web UI.
If you have one of these switches, I’d love for you to give it a try and let me know how it works for you!
Terraform Registry: https://registry.terraform.io/providers/brennoo/hrui
OpenTofu Provider: https://search.opentofu.org/provider/brennoo/hrui
I’m happy to answer any questions about the provider or the hardware it supports. Feedback, bug reports, and ideas for improvement are more than welcome!
Does anyone know of switches similar to this but that might be loadable with Linux? Maybe able to run switchdev or similar?
OpenWRT > Table of Hardware > Switches: https://openwrt.org/toh/views/switches
ansible-openwrt: https://github.com/gekmihesg/ansible-openwrt
/? terraform OpenWRT: https://www.google.com/search?q=terraform+openwrt
/? terraform Open vSwitch: https://www.google.com/search?q=open+vswitch+terraform
Open vSwitch supports OpenFlow: https://en.wikipedia.org/wiki/Open_vSwitch
Open vSwitch > "Porting Open vSwitch to New Software or Hardware" https://docs.openvswitch.org/en/latest/topics/porting/
Optimizing Jupyter Notebooks for LLMs
Jupyter + LLM tools: Ipython-GPT, Elyra, jetmlgpt, jupyter-ai; CoCalc, Colab, NotebookLM,
jupyterlab/jupyter-ai: https://github.com/jupyterlab/jupyter-ai
"[jupyter/enhancement-proposals#128] Pre-proposal: standardize object representations for ai and a protocol to retrieve them" https://github.com/jupyter/enhancement-proposals/issues/128
Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?
How can new and existing firefighting aircraft, land craft, seacraft, and other capabilities help fight fire; in California at present and against the new climate enemy?
# Next-Gen Firefighting Aircraft (working title: NGFFA)
## Roadmap
- [ ] Brainstorm challenges and solutions; new sustainable approaches and time-tested methods
- [ ] Write a grant spec
- [ ] Write a legislative program funding request
## Challenges
- Challenge: Safety; Risk to personnel and civilians
- Challenge: Diving into fire is dangerous for the human aircraft operator
- Challenge: Vorticity due to fire heat; fire CFD (Computational Fluid Dynamics)
-- The vertical updrafts and downdrafts off a fire are dangerous for pilots and for expensive, non-compostable aircraft.
- Challenge: Water drifts away before it hits the ground due to the release altitude, wind, and fire air currents
- Challenge: Water shortage
- Challenge: Water lift aircraft shortage
- Task: Pour thousands of gallons (or Canadian litres) of water onto fire
- Task: Pour a line, a circle, or other patterns to contain fire
- Task: Process seawater quickly enough to avoid, for example, property damage from dropping untreated ocean salt water
https://www.google.com/search?q=fluid+vorticity+fire+plane
## Solutions
- Are Quadcopters (or n-copters) more stable than helicopters or planes in high-wind fire conditions?
- Fire CFD modeling for craft design.
- Light, durable aerospace-grade hydrogen vessels
- Intermodally transportable containers
- Are there stackable container-sized water vessels for freight transport?
- Floating, towable seawater processing and loading capability.
-- Process seawater for: domestic firefighting, disaster relief,
-- Fill vessels at sea.
- "Starlite" as a flame retardant; https://en.wikipedia.org/wiki/Starlite
- starlite youtuber #Replication FWIU: [ cornstarch, flour, sugar, borax ] https://en.wikipedia.org/wiki/Starlite#Replication
- Notes re: "xPrize Wildfire – $11M Prize Competition" (2023) https://news.ycombinator.com/item?id=35658214
- Hydrogels, Aerogels
- (Hemp) Aerogels absorb oil better than treated polyurethane foam and hair.
- EV fire blanket, industry cartridge spec
- Non-flammable batteries; unpressurized Sodium Ion, Proton batteries, Twisted Carbon Nanotube batteries
- Compostable batteries
- Cloud seeding and firefighting
## Reference material
- Aerial firefighting: https://en.wikipedia.org/wiki/Aerial_firefighting
- Aerial firefighting > Comparison table of fixed-wing, firefighting tanker airplanes: https://en.wikipedia.org/wiki/Aerial_firefighting#Comparison...
Imaging Group and Phase Velocities of THz Surface Plasmon Polaritons in Graphene
ScholarlyArticle: "Spacetime Imaging of Group and Phase Velocities of Terahertz Surface Plasmon Polaritons in Graphene" (2024) https://pubs.acs.org/doi/10.1021/acs.nanolett.4c04615
NewsArticle: "Scientists observe and control ultrafast surface waves on graphene" (2024) https://phys.org/news/2025-01-scientists-ultrafast-surface-g...
Reversible computing escapes the lab
Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.
Reversibility isn't actually necessary for most of the energy savings. It saves you an extra maybe 20% beyond what adiabatic techniques can do on their own. Reason being, the energy of the information itself pales in comparison to the resistive losses which dominate the losses in adiabatic circuits, and it's actually a (device-dependent) portion of these resistive losses which the reversible aspect helps to recover, not the energy of information itself.
I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)
I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.
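The scaling argument above can be checked numerically. All process numbers below are purely illustrative, chosen so that f·R·C comes out to the 0.1 figure the comment cites:

```python
# Energy per transition: conventional CMOS ~ C*V^2,
# adiabatic ~ (f*R*C) * C*V^2, per the formulas above.
C = 1e-15   # node capacitance in farads (illustrative)
V = 1.0     # logic swing in volts (illustrative)
R = 1e5     # effective channel resistance in ohms (illustrative)
f = 1e9     # operating frequency in hertz (illustrative); f*R*C = 0.1

E_conv = C * V**2                # conventional switching energy
E_adia = (f * R * C) * C * V**2  # adiabatic resistive loss
print(E_conv / E_adia)           # ~10x: the "order of magnitude" savings
```

Halving f halves E_adia but leaves E_conv unchanged, which is the extra frequency-scaling benefit of adiabatic circuits mentioned above.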
Hi, someone pointed me at your comment, so I thought I'd reply.
First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, then now that's 99%. But, if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below - but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
https://www.sandia.gov/app/uploads/sites/210/2023/11/Comet23... https://www.youtube.com/watch?v=vALCJJs9Dtw
Happy to answer any questions.
Thanks for the reply, was actually hoping you'd pop over here.
I don't think we actually disagree on anything. Yes, without reverse circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, which is only a fraction of the switching cycle. At least for PFAL the figure is somewhere in the 20% range IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)
The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)
What I find interesting is reversibility isn't actually necessary for true adiabatic operation. All that matters is the information of where charge needs to be recovered from can be derived somehow. This could come from information available elsewhere in the circuit, not necessarily the subsequent computations reversed. (Thankfully, quantum non-duplication does not apply here!)
I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.
Ah thanks for the insight into the resonator/switched-cap tradeoff. Yes, capacitative switching designs which are themselves adiabatic I know is a bit of a research topic. In my experience the losses aren't comparable to the resistive losses of the adiabatic circuitry itself though. (I've done SPICE simulations using the sky130 process.)
It's been a while since I looked at it, but I believe PFAL is one of the not-fully-adiabatic techniques that I have a lot of critiques of.
There have been studies showing that a truly, fully adiabatic technique in the sense I'm talking about (2LAL was the one they checked) does about 10x better than any of the other "adiabatic" techniques. In particular, 2LAL does a lot better than PFAL.
> reversibility isn't actually necessary
That isn't true in the sense of "reversible" that I use. Look at the structure of the word -- reverse-able. Able to be reversed. It isn't essential that the very same computation that computed some given data is actually applied in reverse, only that no information is obliviously discarded, implying that the computation always could be reversed. Unwanted information still needs to be decomputed, but in general, it's quite possible to de-compute garbage data using a different process than the reverse of the process that computed it. In fact, this is frequently done in practice in typical pipelined reversible logic styles. But they still count as reversible even though the forwards and reverse computations aren't identical. So, I think we agree here and it's just a question of terminology.
Lower bounds on clock speed are indeed important; generally this arises in the form of maximum latency constraints. Fortunately, many workloads today (such as AI) are limited more by bandwidth/throughput than by latency.
I'd be interested to know if you can get energy savings factors on the order of 100x or 1000x with the capacitive switching techniques you're looking at. So far, I haven't seen that that's possible. Of course, we have a long way to go to prove out those kinds of numbers in practice using resonant charge transfer as well. Cheers...
PFAL has both a fully adiabatic and quasi-adiabatic configuration. (Essentially, the "reverse" half of a PFAL gate can just be tied to the outputs for quasi-adiabatic mode.) I've focused my own research on PFAL because it is (to my knowledge) one of the few fully adiabatic families, and of those, I found it easy to understand.
I'll have to check out 2LAL. I haven't heard of it before.
No, even with a fully adiabatic switched-capacitance driver I don't think those figures are possible. The maximum efficiency I believe is 1-1/n, n being the number of steps (and requiring n-1 capacitors). But the capacitors themselves must each be an order of magnitude larger than the adiabatic circuit itself. So it's a reasonable performance match for an adiabatic circuit running at "max" frequency, with e.g. 8 steps/7 capacitors, but 100x power reduction necessary to match a "slowed" adiabatic circuit would require 99 capacitors... which quickly becomes infeasible!
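The 1 − 1/n figure can be sanity-checked: charging a capacitance through n equal voltage steps dissipates CV²/(2n) instead of the single-step CV²/2 (the standard stepwise-charging result); the numbers below just restate the comment's arithmetic:

```python
def stepwise_loss_fraction(n: int) -> float:
    """Fraction of the one-step loss (C*V^2/2) still dissipated when
    charging through n equal voltage steps (needs n-1 step capacitors)."""
    return 1.0 / n

print(stepwise_loss_fraction(8))    # 0.125 -> 87.5% efficient, 7 capacitors
print(stepwise_loss_fraction(100))  # 0.01 -> 100x reduction needs 99 capacitors
```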
Yeah, 2LAL (and its successor S2LAL) uses a very strict switching discipline to achieve truly, fully adiabatic switching. I haven't studied PFAL carefully but I doubt it's as good as 2LAL even in its more-adiabatic version.
For a relatively up-to-date tutorial on what we believe is the "right" way to do adiabatic logic (i.e., capable of far more efficiency than competing adiabatic logic families from other research groups), see the below talk which I gave at UTK in 2021. We really do find in our simulations that we can achieve 4 or more orders of magnitude of energy savings in our logic compared to conventional, given ideal waveforms and power-clock delivery. (But of course, the whole challenge in actually getting close to that in practice is doing the resonant energy recovery efficiently enough.)
https://www.sandia.gov/app/uploads/sites/210/2022/06/UKy-tal... https://tinyurl.com/Frank-UKy-2021
The simulation results were first presented (in an invited talk to the SRC Decadal Plan committee) a little later that year in this talk (no video of that one, unfortunately):
https://www.sandia.gov/app/uploads/sites/210/2022/06/SRC-tal...
However, the ComET talk I linked earlier in the thread does review that result also, and has video.
How do the efficiency gains compare to speedups from photonic computing, superconductive computing, and maybe fractional Quantum Hall effect at room temperature computing? Given rough or stated production timelines, for how long will investments in reversible computing justify the relative returns?
Also, FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
"The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123 ... https://www.sciencedaily.com/releases/2011/06/110601134300.h... ;
> Abstract: ... Here we show that the standard formulation and implications of Landauer’s principle are no longer valid in the presence of quantum information. Our main result is that the work cost of erasure is determined by the entropy of the system, conditioned on the quantum information an observer has about it. In other words, the more an observer knows about the system, the less it costs to erase it. This result gives a direct thermodynamic significance to conditional entropies, originally introduced in information theory. Furthermore, it provides new bounds on the heat generation of computations: because conditional entropies can become negative in the quantum case, an observer who is strongly correlated with a system may gain work while erasing it, thereby cooling the environment.
I have concerns about density & cost for both photonic & superconductive computing. Not sure what one can do with quantum Hall effect.
Regarding long-term returns, my view is that reversible computing is really the only way forward for continuing to radically improve the energy efficiency of digital compute, whereas conventional (non-reversible) digital tech will plateau within about a decade. Because of this, within two decades, nearly all digital compute will need to be reversible.
Regarding bypassing the Landauer limit, theoretically yes, reversible computing can do this, but not by thermally cooling anything really, but rather by avoiding the conversion of known bits to entropy (and their energy to heat) in the first place. This must be done by "decomputing" the known bits, which is a fundamentally different process from just erasing them obliviously (without reference to the known value).
For the quantum case, I haven't closely studied the result in the second paper you cited, but it sounds possible.
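For scale, the Landauer bound being discussed is tiny in absolute terms; a quick computation of the standard formula E = k·T·ln 2 at room temperature:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # room temperature, kelvin

E_bit = k_B * T * math.log(2)  # minimum heat to obliviously erase one bit
print(E_bit)  # ~2.87e-21 J per bit
```

Today's CMOS dissipates orders of magnitude more than this per switching event, which is why the near-term adiabatic/reversible savings come from resistive losses rather than the information itself.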
/? How can fractional quantum hall effect be used for quantum computing https://www.google.com/search?q=How+can+a+fractional+quantum...
> Non-Abelian Anyons, Majorana Fermions are their own anti particles, Topologically protected entanglement
> In some FQHE states, quasiparticles exhibit non-Abelian statistics, meaning that the order in which they are braided affects the final quantum state. This property can be used to perform universal quantum computation
Anyon > Abelian, Non Abelian Anyons, Toffoli (CCNOT gate) https://en.wikipedia.org/wiki/Anyon#Abelian_anyons
Hopefully there's a classical analogue of a quantum delete operation that cools the computer.
There's no resistance for electrons in superconductors, so there's far less waste heat. But - other than recent advances with rhombohedral trilayer graphene and pentalayer graphene (which isn't really "graphene") - superconductivity requires super-chilling which is too expensive and inefficient.
Photons are not subject to the Landauer limit and are faster than electrons.
In the Standard Model of particle physics, Photons are Bosons, and Electrons are Leptons are Fermions.
Electrons behave like fluids in superconductors.
Photons behave like fluids in superfluids (Bose-Einstein condensates) which are more common in space.
And now they're saying there's a particle that only has mass when moving in certain directions; a semi-Dirac fermion: https://en.wikipedia.org/wiki/Semi-Dirac_fermion
> Because of this, within two decades, nearly all digital compute will need to be reversible.
Reversible computing: https://en.wikipedia.org/wiki/Reversible_computing
Reverse computation: https://en.wikipedia.org/wiki/Reverse_computation
Time crystals demonstrate retrocausality.
Is Hawking radiation from a black hole or from all things reversible?
What are the possible efficiency gains?
Customasm – An assembler for custom, user-defined instruction sets
Is there an ISA for WASM that's faster than RISC-V 64, which is currently 3x faster than x86_64 on x86_64 FWICS? https://github.com/ktock/container2wasm#emscripten-on-browse... demo: https://ktock.github.io/container2wasm-demo/
Show HN: WASM-powered codespaces for Python notebooks on GitHub
Hi HN!
Last year, we shared marimo [1], an open-source reactive notebook for Python with support for execution through WebAssembly [2].
We wanted to share something new: you can now run marimo and Jupyter notebooks directly from GitHub in a Wasm-powered, codespace-like environment. What makes this powerful is that we mount the GitHub repository's contents as a filesystem in the notebook, making it really easy to share notebooks with data.
All you need to do is prepend 'marimo.app' to any Python notebook on GitHub. Some examples:
- Jupyter Notebook: https://marimo.app/github.com/jakevdp/PythonDataScienceHandb...
- marimo notebook: https://marimo.app/github.com/marimo-team/marimo/blob/07e8d1...
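The "prepend marimo.app" scheme can be expressed as a tiny helper (hypothetical; not part of marimo, just restating the URL rule above):

```python
def marimo_url(github_url: str) -> str:
    """Turn a GitHub notebook URL into its marimo.app playground URL
    by prepending marimo.app, as described above."""
    return "https://marimo.app/" + github_url.removeprefix("https://")

print(marimo_url("https://github.com/marimo-team/marimo/blob/main/README.md"))
# -> https://marimo.app/github.com/marimo-team/marimo/blob/main/README.md
```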
Jupyter notebooks are automatically converted into marimo notebooks using basic static analysis and source code transformations. Our conversion logic assumes the notebook was meant to be run top-down, which is usually but not always true [3]. It can convert many notebooks, but there are still some edge cases.
We implemented the filesystem mount using our own FUSE-like adapter that links the GitHub repository’s contents to the Python filesystem, leveraging Emscripten’s filesystem API. The file tree is loaded on startup to avoid waterfall requests when reading many directories deep, but loading the file contents is lazy. For example, when you write Python that looks like
```python
with open("./data/cars.csv") as f:
    print(f.read())
# or
import pandas as pd
pd.read_csv("./data/cars.csv")
```
behind the scenes, you make a request [4] to https://raw.githubusercontent.com/<org>/<repo>/main/data/car....
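A minimal sketch of that lazy-loading pattern (class and function names here are invented for illustration; this is not marimo's actual adapter code): the file tree is known up front, but the bytes are fetched only on first read.

```python
class LazyFile:
    def __init__(self, path, fetch):
        self.path = path      # repo-relative path, e.g. "data/cars.csv"
        self._fetch = fetch   # callable that returns the raw bytes
        self._data = None

    def read(self):
        if self._data is None:            # first access triggers the fetch
            self._data = self._fetch(self.path)
        return self._data

# Fake fetcher standing in for the raw.githubusercontent.com request
calls = []

def fake_fetch(path):
    calls.append(path)
    return b"col1,col2\n1,2\n"

f = LazyFile("data/cars.csv", fake_fetch)
f.read()
f.read()
print(len(calls))  # the content is fetched exactly once
```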
Docs: https://docs.marimo.io/guides/publishing/playground/#open-no...
[1] https://github.com/marimo-team/marimo
[2] https://news.ycombinator.com/item?id=39552882
[3] https://blog.jetbrains.com/datalore/2020/12/17/we-downloaded...
[4] We technically proxy it through the playground https://marimo.app to fix CORS issues and GitHub rate-limiting.
> CORS and GitHub
The Godot docs mention coi-serviceworker; https://github.com/orgs/community/discussions/13309 :
gzuidhof/coi-serviceworker: https://github.com/gzuidhof/coi-serviceworker :
> Cross-origin isolation (COOP and COEP) through a service worker for situations in which you can't control the headers (e.g. GH pages)
CF Pages' free unlimited bandwidth and gitops-style deploy might solve for apps that require more than the 100GB software cap of free bandwidth GH has for open source projects.
Thanks for sharing these resources
> [ FUSE to GitHub FS ]
> Notebooks created from GitHub links have the entire contents of the repository mounted into the notebook's filesystem. This lets you work with files using regular Python file I/O!
Could BusyBox sh compiled to WASM (maybe on emscripten-forge) work with files on this same filesystem?
"Opening a GitHub remote with vscode.dev requires GitHub login? #237371" ... but it works with Marimo and JupyterLite: https://github.com/microsoft/vscode/issues/237371
Does Marimo support local file system access?
jupyterlab-filesystem-access only works with Chrome?: https://github.com/jupyterlab-contrib/jupyterlab-filesystem-...
vscode-marimo: https://github.com/marimo-team/vscode-marimo
"Normalize and make Content frontends and backends extensible #315" https://github.com/jupyterlite/jupyterlite/issues/315
"ENH: Pluggable Cloud Storage provider API; git, jupyter/rtc" https://github.com/jupyterlite/jupyterlite/issues/464
Jupyterlite has read only access to GitHub repos without login, but vscode.dev does not.
Anyways, nbreproduce wraps repo2docker and there's also a repo2jupyterlite.
nbreproduce builds a container to run an .ipynb with: https://github.com/econ-ark/nbreproduce
container2wasm wraps vscode-container-wasm: https://github.com/ktock/vscode-container-wasm
container2wasm: https://github.com/ktock/container2wasm
Scientists Discover Underground Water Vault in Oregon 3x the Size of Lake Mead
Would have been better to not admit it was there or have hidden it.
The last thing we need is for more clean water reserves to be sucked dry by Intel, rather than Intel guaranteeing its own supplies through other means, or to be used as an excuse for Southern California to abandon its (in some counties admirable) efforts to get water resource management under control in concert with the other states in its watersheds.
Resource curse.
TIL semiconductor manufacturing uses a lot of water, relative to other production processes? And energy, for photolithography.
(Edit: nanoimprint lithography may have significantly lower resource requirements than traditional lithography? https://arstechnica.com/reviews/2024/01/canon-plans-to-disru... : "will be “one digit” cheaper and use up to 90 percent less power" )
Datacenters too;
"Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
Most datacenters have no way to return their boiled, sterilized water to a water-treatment plant, so they don't give or sell datacenter waste water back; when the water is evaporated, it carries the heat away with it.
From https://news.ycombinator.com/item?id=42454547#42460317 :
> FWIU, datacenters are unable to sell their waste heat, boiled sterilized steam and water, unused diesel, and potentially excess energy storage.
"Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
How did it form?
How does it affect tectonics on the western coast of the US?
Cascade Range > Geology: https://en.wikipedia.org/wiki/Cascade_Range#Geology
https://www.sci.news/othersciences/geophysics/hikurangi-wate... :
> Revealed by 3D seismic imaging, the newly-discovered [Hikurangi] water reservoir lies 3.2 km (2 miles) under the ocean floor off the coast of New Zealand, where it may be dampening a major earthquake fault that faces the country’s North Island. The fault is known for producing slow-motion earthquakes, called slow slip events. These can release pent-up tectonic pressure harmlessly over days and weeks
The "Slow earthquake" wikipedia article mentions the northern Cascades as a research area of interest: https://en.wikipedia.org/wiki/Slow_earthquake
They say water has a fingerprint; a hydrochemical and/or a geochemical footprint?
Is the water in the reservoir subducted from the surface or is it oozing out of the Earth?
"Dehydration melting at the top of the lower mantle" (2014) https://www.science.org/doi/abs/10.1126/science.1253358 :
> Summary: [...] Schmandt et al. combined seismological observations beneath North America with geodynamical modeling and high-pressure and -temperature melting experiments. They conclude that the mantle transition zone — 410 to 660 km below Earth's surface — acts as a large reservoir of water.
Physicists who want to ditch dark energy
From https://news.ycombinator.com/item?id=36222625#36265001 :
> Further notes regarding Superfluid Quantum Gravity (instead of dark energy)
A 'warrior' brain surgeon saved his Malibu street from wildfires and looters
> training, N95 masks, sourced fire hoses
> sprinklers in the roof, cement tiles instead of wood
> Dr Griffiths, who is also a doctor to the LA Kings hockey team, said if one thing can come from the devastating tragedy, he wants people to get to know their neighbours.
From "Ask HN: Next Gen, Slow, Heavy Lift Firefighting Aircraft Specs?" https://news.ycombinator.com/item?id=42665860 :
> Next-Gen Firefighting Aircraft (working title: NGFFA)
Refactoring with Codemods to Automate API Changes
From "Show HN: Codemodder – A new codemod library for Java and Python" (2024) https://news.ycombinator.com/item?id=39111747 :
> [ codemodder-python, libCST, MOSES and Holman's elegant normal form, singnet/asmoses, Formal Verification, ]
How do photons mediate both attraction and repulsion?
Additional recent findings in regards to photons (and attraction and repulsion) from the past few years:
"Scientists discover laser light can cast a shadow" https://news.ycombinator.com/item?id=42231644 :
- "Shadow of a laser beam" (2024) https://opg.optica.org/optica/fulltext.cfm?uri=optica-11-11-...
- "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315
- "Ultrafast opto-magnetic effects in the extreme ultraviolet spectral range" (2024) https://www.nature.com/articles/s42005-024-01686-7 .. https://news.ycombinator.com/item?id=40911861
- 2024: better than the amplituhedron (for avoiding some Feynman diagrams) .. "Physicists Reveal a Quantum Geometry That Exists Outside of Space and Time" (2024) https://www.quantamagazine.org/physicists-reveal-a-quantum-g... :
- "All Loop Scattering As A Counting Problem" (2023) https://arxiv.org/abs/2309.15913
- "All Loop Scattering For All Multiplicity" (2023) https://arxiv.org/abs/2311.09284
And then gravity and photons:
- "Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0...
- "Photonic implementation of quantum gravity simulator" (2024) https://www.spiedigitallibrary.org/journals/advanced-photoni... .. https://news.ycombinator.com/item?id=42506463
- "Graviton to Photon Conversion via Parametric Resonance" (2023) https://arxiv.org/abs/2205.08767 .. "Physicists discover that gravity can create light" (2023) https://news.ycombinator.com/item?id=35633291#35674794
What about the reverse: if gravitons produce photons, can photons create gravity?
- "All-optical complex field imaging using diffractive processors" (2024) https://www.nature.com/articles/s41377-024-01482-6 .. "New imager acquires amplitude and phase information without digital processing" (2024) https://www.google.com/amp/s/phys.org/news/2024-05-imager-am...
- "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
- "Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://www.nature.com/articles/35018520 .. https://www.impactlab.com/2024/10/13/photons-defy-time-new-q... ; Rubidium
- "Gain-assisted superluminal light propagation" (2000) https://www.nature.com/articles/35018520 ; Cesium
- "Exact Quantum Electrodynamics of Radiative Photonic Environments" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. "New theory reveals the shape of a single photon"
- "Physicists Magnetize a Material with Light" (2025) https://news.ycombinator.com/item?id=42628841
Ask HN: What are the biggest PITAs about managing VMs and containers?
I’ve been asked to write a blog post about “The PITA of managing containers and VMs.”
It's meant to be a rant listicle (with explanations as appropriate). What should I be sure to include?
One of the goals of containers is to unify the development and deployment environments. I hate developing and testing code in containers, so I develop and test code outside them and then package and test it again in a container.
Containerized apps need a lot of special boilerplate to determine how much CPU and memory they are allowed to use. It’s a lot easier to control resource limits with virtual machines, because all of the system's resources are dedicated to the application.
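A sketch of one piece of that boilerplate, assuming cgroup v2: `/sys/fs/cgroup/cpu.max` holds `<quota> <period>` (or `max <period>` for unlimited), which `os.cpu_count()` ignores. The parser below is illustrative, not taken from any particular library.

```python
def parse_cpu_max(text):
    """Parse a cgroup-v2 cpu.max value into an effective CPU count.

    cpu.max contains "<quota> <period>" in microseconds, or
    "max <period>" when the container has no CPU limit.
    """
    quota, period = text.split()
    if quota == "max":
        return None                       # unlimited
    return int(quota) / int(period)       # e.g. "200000 100000" -> 2.0 CPUs

print(parse_cpu_max("200000 100000"), parse_cpu_max("max 100000"))  # 2.0 None
```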
Orchestration of multiple containers for dev environments is just short of feature complete. With Compose, it’s hard to bring down specific services and their dependencies so you can then rebuild and rerun. I end up writing Ansible playbooks to start and stop components that are designed to be executed in particular sequences. Ansible makes it hard to detach a container, wait a specified time, and see if it’s running. Compose just needs to be updated to support management of shutting down and restarting containers, so I can move away from Ansible.
Services like Kafka that query the host name and broadcast it are difficult to containerize since the host name inside the container doesn’t match the external host name. Requires manual overrides which are hard to specify at run time because the orchestrators don’t make it easy to pass in the host name to the container. (This is more of a Kafka issue, though.)
Systemd, k8s, Helm, and Terraform model service dependencies.
Quadlet is the podman recommended way to do podman with systemd instead of k8s.
Podman supports kubes of containers and pods of containers;
man podman-container
man podman-generate-kube
man podman-kube
man podman-pod
`podman generate kube` generates YAML for `podman kube play` and for k8s `kubectl`. Podman Desktop can create a local k8s (Kubernetes) cluster with any of kind, minikube, or OpenShift Local. k3d and Rancher also support creating one-node k8s clusters with minimal RAM requirements for cluster services.
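For illustration (field values here are invented, not captured from real `podman generate kube` output), the generated YAML is an ordinary k8s Pod manifest that both `kubectl apply -f` and `podman kube play` accept:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
```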
kubectl is the utility for interacting with k8s clusters.
k8s Ingress API configures DNS and Load Balancing (and SSL certs) for the configured pods of containers.
E.g. Traefik and Caddy can also configure the load balancer web server(s) and request or generate certs given access to a docker socket to read the labels on the running containers to determine which DNS domains point to which containers.
Container labels can be specified in the Dockerfile/Containerfile, and/or a docker-compose.yml/compose.yml, and/or in k8s yaml.
Compose supports scaling a service to a number of containers: `docker compose up --scale web=3`.
Terraform makes deployed infrastructure consistent with the declared configuration.
Compose does not support rolling or red/green deployment strategies. Does compose support HA high-availability deployments? If not, justify investing in a compose yaml based setup instead of k8s yaml.
Quadlet is the way to do podman containers without k8s; with just systemd for now.
Thanks! I’ll take a look at quadlet.
I find that I tend to package one-off tasks as containers as well; for example, creating database tables and users. Compose supports these sorts of things. Ansible actually makes it easy to use and block on container tasks that you don’t detach.
I’m not interested in running kubernetes, even locally.
Podman kube has support for k8s Jobs now: https://github.com/containers/podman/pull/23722
k8s docs > concepts > workloads > controllers > Jobs: https://kubernetes.io/docs/concepts/workloads/controllers/jo...
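A minimal, hypothetical Job manifest of the kind `podman kube play` can now run (names, image, and command are placeholders for a one-off task like creating database tables):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-tables
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: docker.io/library/postgres:16
          command: ["psql", "-f", "/sql/create_tables.sql"]
      restartPolicy: Never
  backoffLimit: 3
```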
Ingress, Deployment, StatefulSets,: https://news.ycombinator.com/item?id=37763931
Northeastern's curriculum changes abandon fundamentals of computer science
Racket: https://learnxinyminutes.com/racket/
Python: https://learnxinyminutes.com/python/
pyret?
pyret: https://pyret.org/pyret-code/ :
> Why not just use Java, Python, Racket, OCaml, or Haskell?
IMHO fun educational learning languages aren't general purpose or production ready; and so also at this point in my career I would appreciate a more reusable language in a CS curriculum.
Python isn't CS pure like [favorite lisp], but it is a language coworkers will understand, it supports functional and object-oriented paradigms, and the pydata tools enable CS applications in STEM.
A lot of AI and ML code is written in Python, with C/Rust/Go.
There's an AIMA Python, but there's not a pyret or a racket AIMA or SICP, for example.
"Why MIT Switched from Scheme to Python (2009)" https://news.ycombinator.com/item?id=14167453
Computational thinking > Characteristics: https://en.wikipedia.org/wiki/Computational_thinking#Charact...
"Ask HN: Which school produces the best programmers or software engineers?" https://news.ycombinator.com/item?id=37581843
overleaf/learn/Algorithms re: nonexecutable LaTeX ways to specify algorithms: https://www.overleaf.com/learn/latex/Algorithms
Book: "Classic Computer Science Algorithms in Python"
coding-problems: https://github.com/MTrajK/coding-problems
coding-interview-university: https://github.com/jwasham/coding-interview-university
coding-interview-university lists "Computational complexity" but not "Computational thinking", which possibly isn't that different from WBS (work breakdown structure), problem decomposition and resynthesis, and the Scientific Method
Ask HN: How can I learn to better command people's attention when speaking?
I've noticed over the years that whenever I'm in group conversations in a social setting, people in general don't pay too much attention to what I say. For example, let's say the group is talking about travel and someone says something I find relatable e.g. someone mentions a place I've been to and really liked. When I try to contribute to the conversation, people just don't seem interested, and typically the conversation moves on as if I hadn't said anything. If I try to speak for a longer time (continuing with the travel example, let's say I try to talk about a particular attraction I enjoyed visiting at that location), I'm usually interrupted, and the focus shifts to whoever interrupted me.
This has happened (and still happens often) a lot, in different social circles, with people of diverse backgrounds. So, I figure it's not that I hang out with rude people, the problem must be me. I think the saddest part of all this is that even my wife's attention drifts off most of the time I try to talk to her.
I know it's not a language barrier issue, and I know for sure I enunciate my words well. I wonder though if the issue may be that I have a weak voice, or just an overall weak presence/body language. How can that be improved, if that's the case?
Could building rapport help?
Book: "Power Talk: Using Language to Build Authority and Influence" (2001) https://g.co/kgs/6L8MxNy
- Speaking from the edge, Speaking from the center
"From Comfort Zone to Performance Management" (2009) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2... .. https://news.ycombinator.com/item?id=32786594 :
> The ScholarlyArticle also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages
Show HN: TabPFN v2 – A SOTA foundation model for small tabular data
I am excited to announce the release of TabPFN v2, a tabular foundation model that delivers state-of-the-art predictions on small datasets in just 2.8 seconds for classification and 4.8 seconds for regression compared to strong baselines tuned for 4 hours. Published in Nature, this model outperforms traditional methods on datasets with up to 10,000 samples and 500 features.
The model is available under an open license: a derivative of the Apache 2 license with a single modification, adding an enhanced attribution requirement inspired by the Llama 3 license: https://github.com/PriorLabs/tabpfn. You can also try it via API: https://github.com/PriorLabs/tabpfn-client
TabPFN v2 is trained on 130 million synthetic tabular prediction datasets to perform in-context learning and output a predictive distribution for the test data points. Each dataset acts as one meta-datapoint to train the TabPFN weights with SGD. As a foundation model, TabPFN allows for fine-tuning, density estimation and data generation.
Compared to TabPFN v1, v2 now natively supports categorical features and missing values. TabPFN v2 performs just as well on datasets with or without these. It also handles outliers and uninformative features naturally, problems that often throw off standard neural nets.
TabPFN v2 performs as well with half the data as the next best baseline (CatBoost) with all the data.
We also compared TabPFN to the SOTA AutoML system AutoGluon 1.0. Standard TabPFN already outperforms AutoGluon on classification and ties on regression, but ensembling multiple TabPFNs in TabPFN v2 (PHE) is even better.
There are some limitations: TabPFN v2 is very fast to train and does not require hyperparameter tuning, but inference is slow. The model is also only designed for datasets up to 10k data points and 500 features. While it may perform well on larger datasets, it hasn't been our focus.
We're actively working on removing these limitations and intend to release new versions of TabPFN that can handle larger datasets, have faster inference and perform in additional predictive settings such as time-series and recommender systems.
We would love for you to try out TabPFN v2 and give us your feedback!
Anyone tried this? Is this actually overall better than xgboost/catboost?
Benchmark of tabpfn<2 compared to xgboost, lightgbm, and catboost: https://x.com/FrankRHutter/status/1583410845307977733 .. https://news.ycombinator.com/item?id=33486914
A video tour of the Standard Model (2021)
Standard Model: https://en.wikipedia.org/wiki/Standard_Model
Mathematical formulation of the Standard Model: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
Physics beyond the Standard Model: https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Mo...
Story of the LaTeX representation of the standard model, from a comment re: "The deconstructed Standard Model equation" (2016): https://news.ycombinator.com/item?id=41753471#41772385
Can manim work with sympy expressions?
A Standard Model demo video could vary each (or a few) of the highlighted variables and visualize the [geometric] impact/sensitivity of each
Excitons in the fractional quantum Hall effect
"Excitons in the Fractional Quantum Hall Effect" (2024) https://arxiv.org/abs/2407.18224
Fractional quantum Hall effect in rhombohedral, pentalayer graphene: https://news.ycombinator.com/item?id=42314581
Fractional quantum Hall effect: https://en.wikipedia.org/wiki/Fractional_quantum_Hall_effect
Superconducting nanostrip single photon detectors made of aluminum thin-films
"Superconducting nanostrip single photon detectors fabricated of aluminum thin-films" (2024) https://www.sciencedirect.com/science/article/pii/S277283072...
"Fabricating single-photon detectors from superconducting aluminum nanostrips" (2025) https://phys.org/news/2025-01-fabricating-photon-detectors-s...
Combining graphene and nanodiamonds for better microplasma devices
"High stability plasma illumination from micro discharges with nanodiamond decorated laser induced graphene electrodes" (2024) https://www.sciencedirect.com/science/article/pii/S277282852...
Physicists Magnetize a Material with Light
"Researchers discover new material for optically-controlled magnetic memory" (2024) https://phys.org/news/2024-08-material-optically-magnetic-me... ..
"Distinguishing surface and bulk electromagnetism via their dynamics in an intrinsic magnetic topological insulator" (2024) https://www.science.org/doi/10.1126/sciadv.adn5696
> MnBi2Te4
ScholarlyArticle: "Terahertz field-induced metastable magnetization near criticality in FePS3" (2024) https://www.nature.com/articles/s41586-024-08226-x
"Room temperature chirality switching and detection in a helimagnetic MnAu2 thin film" (2024) https://www.nature.com/articles/s41467-024-46326-4 .. https://scitechdaily.com/memory-breakthrough-helical-magnets... .. https://news.ycombinator.com/item?id=41921153
Can we create matter from light?
Indeed we can:
https://www.energy.gov/science/np/articles/making-matter-col...
Scientists find strong evidence for the long-predicted Breit-Wheeler effect—generating matter and antimatter from collisions of real photons.
Breit–Wheeler process > Experimental observations: https://en.wikipedia.org/wiki/Breit%E2%80%93Wheeler_process#...
Scientists find 'spooky' quantum entanglement within individual protons
Tell HN: ChatGPT can't show you a 5.25" floppy disk
Challenge: Come up with a query that makes it draw something resembling a 5.25" floppy, that is, without the metal shield and hub that is present on the 3.5" disks.
PostgreSQL Support for Certificate Transparency Logs Now Available
Are there Merkle hashes between the rows in the PostgreSQL CT store like there are in the Trillian CT store?
Sigstore Rekor also has centralized Merkle hashes.
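As a toy illustration (not the Trillian or Rekor implementation): a hash chain is the simplest variant of linking rows, where tampering with any earlier row changes every later digest.

```python
import hashlib

def chain(entries):
    """Chain log entries so each row's hash commits to all prior rows."""
    prev = b"\x00" * 32            # genesis value
    hashes = []
    for e in entries:
        prev = hashlib.sha256(prev + e.encode()).digest()
        hashes.append(prev.hex())
    return hashes

a = chain(["cert1", "cert2", "cert3"])
b = chain(["cert1", "TAMPERED", "cert3"])
print(a[0] == b[0], a[2] == b[2])  # True False: tampering row 2 changes row 3's hash
```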
Doesn’t Rekor run on top of Trillian?
German power prices turn negative amid expansion in renewables
Given the intraday prices, are there sufficient incentives to stimulate creation of energy storage businesses to sell the excess electricity back a couple hours or days later?
Or have the wasteful idiot cryptoasset miners been regulated out of existence such that there is no longer a buyer of last resort for excess rationally-subsidized clean energy?
Are the Duck Curve and Alligator curve problems the same in EU and other energy markets with and without intraday electricity prices?
Duck curve: Solar peaks around noon, but energy usage peaks around the evening commute and dinner; https://en.wikipedia.org/wiki/Duck_curve
Alligator curve: Wind peaks in the middle of the night;
https://fresh-energy.org/renewable-integration-in-the-midwes... :
> So, if we’re not duck-like, what is the Upper Midwest’s energy mascot? We give you: the Smilin’ Gator Curve. Unlike California’s ominous, faceless duck, Sally Gator welcomes the Midwest’s commitment to renewable generation!
(An excellent drawing of a cartoon municipally-bonded incumbent energy utility mascot!)
Can cryptoasset or data-mining firms scale to increase demand for electricity when energy prices are low or below zero? What are their relocation costs?
Is such a low subsidized energy price lucrative to energy storage, not-yet-PQ Proof-of-Work, and other data mining firms?
Can't gravitational potential energy (GPE) storage in old but secured mine shafts scale to meet energy storage requirements?
As far as I’m aware, at least from what has been explained to me, storage is a smaller problem. The bigger problem is how you transfer that energy from where it is stored to where it is needed. Wind farms in the sea help nothing in Bayern.
So, per your understanding, grid connectivity for energy production projects is a bigger issue than energy storage in their market?
Price falls below zero because there's a supply glut (and due to aggressive, excessive, or effective subsidies to accelerate the transition to clean energy).
Is it the national or regional electricity prices that are falling below zero?
Is the price lower during certain hours of the day? If so, then energy storage could probably smooth that out.
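A toy sketch of that smoothing arithmetic, with made-up hourly prices: a battery is paid to charge during negative-price hours, then sells the stored energy at the peak.

```python
# Hypothetical intraday prices in EUR/MWh, one value per hour
prices = [-5, -12, -8, 30, 45, 60]

# Charge 1 MWh in each negative-price hour (negative price = paid to consume)
charge_hours = [p for p in prices if p < 0]
revenue_charging = -sum(charge_hours)                  # 25 EUR earned while charging

# Discharge all stored MWh at the peak price
revenue_discharging = max(prices) * len(charge_hours)  # 3 MWh * 60 EUR = 180 EUR

print(revenue_charging + revenue_discharging)  # 205
```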
Zasper: A Modern and Efficient Alternative to JupyterLab, Built in Go
I am the author of Zasper.
The unique feature of Zasper is that the Jupyter kernel handling is built with Go coroutines and is far superior to how it's done by JupyterLab in Python.
Zasper uses one fourth of the RAM and one fourth of the CPU used by JupyterLab. While JupyterLab uses around 104.8 MB of RAM and 0.8 CPUs, Zasper uses 26.7 MB of RAM and 0.2 CPUs.
Other features like Search are slow because they are not refined.
I am building it alone fulltime and this is just the first draft. Improvements will come for sure in the near future.
I hope you liked the first draft.
IPython maintainer and Jupyter dev here (even if I barely touch frontend stuff these days). Happy to see diversity; keep up the good work, and happy new year. Feel free to open issues upstream if you find a lack of documentation or an issue with the protocol. You can also try to reach out to the Jupyter media strategy team; maybe they'll be open to having a blog post about this on blog.jupyter.org
That's stellar sportsmanship right there.
Not that jupyter's team needed even more respect from the community but damn.
I think that's fairly normal; having alternative frontends can only be beneficial to the community. I know it also looks like there is a single Jupyter team, but the project is quite large, there are a lot of constraints and disagreements internally, and there is no way to accommodate all users in the default Jupyter install. Alternatives are always welcome; at least if they don't fragment the ecosystem by being backward-incompatible with the default.
Also, to be fair, I'm one of the Jupyter devs who agrees with many of OP's points and would have pulled the project in a different direction; but regardless, I will still support people wanting to go in a different direction than mine.
> Alternatives are always welcome; at least if they don't fragment the ecosystem by being backward-incompatible with the default.
Genuinely curious; what mechanisms has Jupyter introduced to prevent ecosystem fragmentation?
The Jupyter community maintains a public spec of the notebook file format [1], the kernel protocol [2], etc. I have been involved with many alternative Jupyter clients, and having these specs combined with a friendly and welcoming community is incredibly helpful!!!
[1] https://github.com/jupyter/nbformat
[2] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
jupyter-server/enterprise_gateway: https://github.com/jupyter-server/enterprise_gateway
JupyterLab supports Lumino and React widgets.
Jupyter Notebook was built on jQuery, but Notebook is now forked from JupyterLab and there's NbClassic.
Breaking the notebook extension API in the move from Notebook to Lab unfortunately caused rework in exchange for progress, as I recall.
jupyter-xeus/xeus is an "Implementation of the Jupyter kernel protocol in C++": https://github.com/jupyter-xeus/xeus
jupyter-xeus/xeus-python is a "Jupyter kernel for the Python programming language" that's also what JupyterLite runs in WASM instead of ipykernel: https://github.com/jupyter-xeus/xeus-python#what-are-the-adv...
JupyterLite kernels normally run in WASM; which they are compiled to by emscripten / LLVM.
To also host WASM kernels in a Go process, I just found goingo: https://github.com/fizx/goingo .. https://news.ycombinator.com/item?id=26159440
VSCode and vscode.dev support WASM container runtimes now; so the Python kernel runs in WASM, inside a WASM container, inside VSCode, FWIU.
Vscode supports polyglot notebooks that run multiple kernels, like "vatlab/sos-notebook" and "minrk/allthekernels". Defining how to share variables between kernels is the more unsolved part AFAIU. E.g. Arrow has bindings for zero-copy sharing in multiple languages.
Cocalc, Zeppelin, Marimo notebook, Data Bricks, Google Colaboratory (Colab tools), and VSCode have different takes on notebooks with I/O in JSON.
There is no CDATA in HTML5; so HTML within an HTML based notebook format would need to escape encode binary data in cell output, too. But the notebook format is not a packaging format. So, for reproducibility of (polyglot) notebooks there must also be a requirements.txt or an environment.yml to indicate the version+platform of each dependency in Python and other languages.
repo2docker (and repo2podman) build containers by installing packages according to the first requirements.txt or environment.yml it finds according to REES, the Reproducible Execution Environment Standard. repo2docker includes a recent version of JupyterLab in the container.
JupyterLab does not default to HTTPS (e.g. with a Let's Encrypt or self-signed cert) but probably should, because Jupyter is a shell that can run commands as the user that owns the Jupyter kernel process.
Mosh is another way to run a remote terminal; the Jupyter terminal is not built on Mosh (Mobile Shell).
jupyterlab/jupyter-collaboration for real time collaboration is based on the yjs/yjs CRDT. https://github.com/jupyterlab/jupyter-collaboration
Cocalc's Time Slider tracks revisions to all files in a project; including latex manuscripts (for ArXiV), which - with Computer Modern fonts and two columns - are the typical output of scholarly collaboration on a ScholarlyArticle.
At this stage, notebooks should be a GUI powered docker-like image format you download and then click to run.
Non programmers using notebooks are usually the least qualified to make them reproducible, so better just ship the whole thing.
There are packaged installers for the jupyterlab-desktop GUI for Windows, Mac, and Linux: https://github.com/jupyterlab/jupyterlab-desktop#installatio...
Docker Desktop and Podman Desktop are GUIs for running containers on Windows, Mac, and Linux.
Containers become out of date quickly.
If programmer or non-programmer notebook authors do not keep versions specified in a requirements.txt upgraded, what will notify other users that they are installing old versions of software?
Are there CVEs in any of the software listed in the SBOM for a container?
There should be tests to run after upgrading notebook and notebook server dependencies.
Notes re: notebooks, reproducibility, and something better than MHTML/ZIP; https://news.ycombinator.com/item?id=35896192 , https://news.ycombinator.com/item?id=35810320
From a JEP proposing "Markdown based notebooks" https://github.com/jupyter/enhancement-proposals/pull/103#is... :
> Any new package format must support cryptographic signatures and ideally WoT identity
Any new package format for jupyter must support multiple languages, because polyglot notebooks may require multiple jupyter kernels.
Existing methods for packaging notebooks as containers and/or as WASM: jupyter-docker-stacks, repo2docker / repo2podman, jupyterlite, container2wasm
You can sign and upload a container image built with repo2docker to any OCI image registry like Docker, Quay, GitHub, GitLab, Gitea; but because Jupyter runs a command execution shell on a TCP port, users should upgrade jupyter to limit the potential for remote exploitation of security vulnerabilities.
> Non programmers using notebooks are usually the least qualified to make them reproducible, so better just ship the whole thing.
Programs should teach idempotency, testing, isolation of sources of variance, and reproducibility.
What should the UI explain to the user?
If you want your code to be more likely to run in the future, you need to add a "package" or a "package==version" string in a requirements.txt (or pyproject.toml, or an environment.yml) for each `import` statement in the code.
If you do not specify the exact versions with `package==version` or similar, when users try to install the requirements to run your notebook, they could get a newer or a different version of a package for a different operating system.
If you want to prevent MITM of package installs, you need to specify a hash for the package for this platform in the requirements.txt or similar; e.g. `package==version --hash=sha256:abc123` (pip's hash-checking mode).
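For example, a hash-pinned requirements.txt might look like the following sketch (the package name and digest are illustrative placeholders, not a real package or hash):

```text
# Install with hash-checking mode enabled:
#   pip install --require-hashes -r requirements.txt
somepackage==1.2.3 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With `--require-hashes`, pip refuses to install any archive whose digest does not match one of the listed hashes.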
If you want to further limit software supply chain compromise, you must check the cryptographic signatures on packages to install, and verify that you trust that key to sign that package. (This is challenging even for expert users.)
WASM containers that run jupyter but don't expose it on a TCP port may be less of a risk, but there is a performance penalty to WASM.
If you want users to be able to verify that your code runs and has the same output (is "reproducible"), you should include tests to run after upgrading notebook and notebook server dependencies.
NASA, Axiom Space Change Assembly Order of Commercial Space Station
I wonder if there's any useful heavy equipment aboard the ISS that could be transferred to the Axiom prior to separation and thus salvaged. It'd have to be stuff the ISS could do without for the remaining couple years of its life.
I wonder if the ISS could instead be scrapped to the moon.
Let's get this space station to the moon.
Can a [Falcon 9 [Heavy] or similar] rocket shove the ISS from its current attitude into an Earth-Moon orbit with or without orbital refuelling?
The ISS weighs 900,000 lbs on Earth.
Have we yet altered the orbital trajectory of anything that heavy in space?
Can any existing rocket program rendezvous and boost sideways to alter the trajectory of NEOs (Near-Earth Objects) or aging, heirloom, defunct space stations?
Which of the things of ISS that we have internationally paid to loft into orbit would be useful for future robot, human, and emergency operations on the Moon?
NASA evaluated a bunch of options before deciding to de-orbit ISS. You can read a summary here: https://www.nasa.gov/wp-content/uploads/2024/06/iss-deorbit-... (PDF)
About boosting to a higher orbit, they wrote:
"Space station operations require a full-time crew to operate, and as such, an inability to keep crews onboard would rule out operating at higher altitudes. The cargo and crew vehicles that service the space station are designed and optimized for its current 257 mile (415km) altitude and, while the ability of these vehicles varies, NASA’s ability to maintain crew on the space station at significantly higher altitudes would be severely impacted or even impossible with the current fleet. This includes the International crew and cargo fleet, as Russian assets providing propulsion and attitude control need to remain operational through the boost phase.
"Ignoring the requirement of keeping crew onboard, NASA evaluated orbits above the present orbital regime that could extend just the orbital lifetime of the space station. [...]
"However, ascending to these orbits would require the development of new propulsive and tanker vehicles that do not currently exist. While still currently in development, vehicles such as the SpaceX Starship are being designed to deliver significant amounts of cargo to these orbits; however, there are prohibitive engineering challenges with docking such a large vehicle to the space station and being able to use its thrusters while remaining within space station structural margins. Other vehicles would require both new certifications to fly at higher altitudes and multiple flights to deliver propellant.
"The other major consideration when going to a higher altitude is the orbital debris regime at each specified locale. The risk of a penetrating or catastrophic impact to space station (i.e., that could fragment the vehicle) increases drastically above 257 miles (415 km). While higher altitudes provide a longer theoretical orbital life, the mean time between an impact event decreases from ~51 years at the current operational altitude to less than four years at a 497 mile (800 km), ~700-year orbit. This means that the likelihood of an impact leaving station unable to maneuver or react to future threats, or even a significant impact resulting in complete fragmentation, is unacceptably high. NASA has estimated that such an impact could permanently degrade or even eliminate access to LEO for centuries."
Thanks for the research.
How are any lunar orbital trajectories relatively safe given the same risks to all crafts at such altitudes?
Is it mass or thrust, or a failure to plan something better than inconsiderately decommissioning it into the atmosphere and ocean?
If there are escape windows to the moon for other programs, how are there no escape windows to the moon for the ISS?
Given the standing risks of existing orbital debris and higher-altitude orbits' lack of shielding, are NEO impact collisions with e.g. hypersonic glide delivery vehicles advisable methods for NEO avoidance?
The NEO avoidance need is still to safely rendezvous and shove things headed for earth orbit into a different trajectory;
Is there a better plan than blowing a NEO up into fragments still headed for earth, like rendezvousing and shoving to the side?
m_ISS ~ 4.5e5 kg [1]
Rocket equation [2]:
m_0 = m_f exp(v_delta / v_e)
where
m_f = final mass, i.e. mass of ISS and the boosters
m_0 = m_f + propellant mass
v_delta = velocity change
v_e = effective exhaust velocity of the boosters
Let's try a high-thrust transfer from LEO to the Lunar Gateway's orbit via TLI (Trans-Lunar Injection) [3]:
v_delta = 3.20 + 0.43 = 3.63 km/s
For boosters, let's use the dual-engine Centaur III (because Wikipedia has mass and v_e data for it) [4]:
m_dry = 2462 kg
m_propellant = 20830 kg
v_e = 4.418 km/s
The idea is to attach n of these to the ISS. The rocket equation becomes
m_ISS + n (m_dry + m_propellant) = (m_ISS + n m_dry) exp(v_delta / v_e)
Solve for n:
n = m_ISS (exp(v_delta / v_e) - 1) / (m_propellant + m_dry (1 - exp(v_delta / v_e)) )
Plug in numbers and find
n ~ 32.4
So we need 33 Centaur III (and some way to attach them, which I optimistically assume won't add significantly to the ISS mass).
Total Centaur III + propellant mass: 33 * (2462 + 20830) = 768636 kg
Planned Starship payload capacity to LEO is 2e5 kg [5], so assuming that a way can be found to fit 7 Centaur III in its payload bay, we can get all 33 boosters to LEO with five Starship launches.
Why not use Starship itself? Its Raptor Vacuum engines have lower v_e (~3.7 km/s) [6], and if you want it back, you need to add fuel for the return trip to m_f. Exercise for the reader!
[1] https://en.wikipedia.org/wiki/International_Space_Station
[2] https://en.wikipedia.org/wiki/Tsiolkovsky_rocket_equation
[3] https://en.wikipedia.org/wiki/Delta-v_budget#Earth_Lunar_Gat...
[4] https://en.wikipedia.org/wiki/Centaur_(rocket_stage)
[5] https://en.wikipedia.org/wiki/SpaceX_Starship_(spacecraft)
[6] https://en.wikipedia.org/wiki/SpaceX_Raptor#Raptor_Vacuum
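The arithmetic above can be checked with a short script (constants copied from the comment; `n` is the number of Centaur III boosters):

```python
import math

m_iss = 4.5e5     # ISS mass, kg [1]
m_dry = 2462.0    # Centaur III dry mass, kg [4]
m_prop = 20830.0  # Centaur III propellant mass, kg [4]
v_e = 4418.0      # effective exhaust velocity, m/s [4]
dv = 3630.0       # LEO -> Lunar Gateway delta-v, m/s [3]

r = math.exp(dv / v_e)  # mass ratio m_0 / m_f from the rocket equation
# Solve m_ISS + n*(m_dry + m_prop) = (m_ISS + n*m_dry) * r for n:
n = m_iss * (r - 1) / (m_prop + m_dry * (1 - r))
print(round(n, 1))  # ~32.4, so 33 boosters
```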
Thanks for the numbers. I think it's still possible to create gists with .ipynb Jupyter notebooks, which can have LaTeX math and code with test assertions; symbolic algebra with sympy, astropy, GIZMO-public, spiceypy
> and some way to attach them
Because of my love for old kitchens on the Moon.
(The cost then to put all of that into orbit, in today's dollars)
So, orbitally refuelling Starship(s) would be less efficient than launching 33 boosters of the cited capability all at once.
This list is pretty short:
Template:Engine_thrust_to_weight_table: https://en.wikipedia.org/wiki/Template:Engine_thrust_to_weig...
What about solar; could any solar-powered thrusters - given an unlimited amount of time - shove the ISS into a dangerous orbit towards the moon instead of the ocean?
> and some way to attach them
There's a laser-welding-in-space spec, and grants for it, FWIU.
Can any space program do robotic spacecraft hull repair in orbit, like R2D2? With laser welding?
Or do we need to find more people like Col. McBride in the Brad Pitt space movie; more astronauts?
Nanoimprint Lithography Aims to Take on EUV
Nanoimprint lithography (NIL) https://en.wikipedia.org/wiki/Nanoimprint_lithography
Feasibility of RF Based Wireless Sensing of Lead Contamination in Soil
"Feasibility of RF Based Wireless Sensing of Lead Contamination in Soil" (2024) [PDF] https://www.ewsn.org/file-repository/ewsn2024/ewsn24-final99...
Multispectral imaging through scattering media and around corners
ScholarlyArticle: "Multispectral imaging through scattering media and around corners via spectral component separation" (2024) https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-27-48786&id...
Multispectral identification of plastics: "Multi-sensor characterization for an improved identification of polymers in WEEE recycling" (2024) https://www.sciencedirect.com/science/article/pii/S0956053X2...
Ore to Iron in a Few Seconds: New Chinese Process Will Revolutionise Smelting
There's also a new process for Titanium production; for removing the oxygen FWIU:
"Direct production of low-oxygen-concentration titanium from molten titanium" (2024) https://www.nature.com/articles/s41467-024-49085-4
Awesome Donations: A repository of FLOSS donation options
Jupyter is now with LF Charities, with a new 501(c) EIN: https://github.com/jupyter/governance/issues/204#issuecommen...
NumFOCUS has Sponsored Projects: https://numfocus.org/sponsored-projects and Affiliated Projects: https://numfocus.org/sponsored-projects/affiliated-projects
Datasette-Lite is another way to support client-side search and faceting of data like a list of open source projects with attributes; not as YAML-LD in git, but as SQLite built with sqlite-utils.
FUNDING.yml is the GitHub Sponsors way to support donations:
"How to add a FUNDING.yml to github orgs and org/repos" https://docs.github.com/en/repositories/managing-your-reposi...
Open source projects can link to a "Custom URL" in a GitHub organization or project's FUNDING.yml file to add a "Donate" button.
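A minimal FUNDING.yml sketch with a custom donate URL (the usernames, slug, and URL are placeholders):

```yaml
# .github/FUNDING.yml
github: [octocat]                       # GitHub Sponsors usernames
open_collective: example-project        # Open Collective slug
custom: ["https://example.org/donate"]  # shows a custom "Donate" link
```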
From https://github.com/sponsors :
> Support the developers who power open source
> button text: See your top dependencies
> button text: Get sponsored
Open Source sponsoring organizations like NumFOCUS, LFX Platform, and LF Charities (Linux Foundation) could crawl GitHub Sponsors FUNDING.yml documents to build Donate pages.
Test as a normal human customer, in through the front door like everyone else:
I want to donate with a DAF or a QCD,
How do I find the EIN for a nonprofit corporation in the US?
/? "$project" "EIN"
/? ein donor advised fund :
DAF: Donor Advised Fund
QCD: Qualified Charitable Distribution (from an IRA)
DAF pros and cons: [ https://smartasset.com/investing/donor-advised-funds-pros-an... , ]
"Ten of America’s 20 Top Public Charities Are Donor-Advised Funds" (2024) https://inequality.org/article/top-public-charities-dafs/
From https://news.ycombinator.com/item?id=30189240 :
- CharityVest: https://www.charityvest.org/ :
>> Donate cash, stock, crypto, or complex assets to your giving account when it's most tax-efficient. It’s all tax-deductible upon receipt and on one tax statement at year-end.
>> Make employees the authors of your impact story with personal and corporate donor-advised funds.
>> Provide tax-smart charitable stipends to employees, automate matching gifts, and enable equity donations.
- TheGivingBlock handles (crypto) charitable donations to and for nonprofits: https://thegivingblock.com/
W3C Web Monetization could solve for nonprofit donations too, I think. https://WebMonetization.org/
Web Monetization handles micropayments with low fees (with ILP Interledger Protocol, like FedNow) with an open standard;
Web Monetization > Monetizing a web page: https://webmonetization.org/specification/#monetizing-a-web-... :
<link
rel="monetization"
href="https://example.com/pay"
onmonetization="sayThanks(this)"
>
<script>
function sayThanks(monetizedLink) {
// Do something here
}
</script>
Maybe some links to obviously relevant and tangential Wikipedia pages about concepts described herein and herewith:
Charitable organization: https://en.wikipedia.org/wiki/Charitable_organization
Philanthropy in the US: https://en.wikipedia.org/wiki/Philanthropy_in_the_United_Sta...
Philanthropy > Criticism, Differences between traditional and new philanthropy > Promoting equity through science and health philanthropy: https://en.wikipedia.org/wiki/Philanthropy#Promoting_equity_...
Nonprofit organization > Fundraising, Problems, tech support: https://en.wikipedia.org/wiki/Nonprofit_organization#Fundrai...
Non-profit technology > Uses: https://en.wikipedia.org/wiki/Non-profit_technology#Uses :
> [Computers and Internets,;] volunteer management and support, donor management, client tracking and support, project management, human resources (paid staff) management, financial accounting, program evaluation, research, marketing, activism and collaboration. Nonprofit organizations that engage in income-generation activities, such as ticket sales, may also use technology for these functions.
Static search trees: faster than binary search
suffix-array-searching/static-search-tree/src/s_tree.rs: https://github.com/RagnarGrootKoerkamp/suffix-array-searchin... partitioned_s_tree.rs: https://github.com/RagnarGrootKoerkamp/suffix-array-searchin...
Arnis: Generate cities in Minecraft from OpenStreetMap
Re: real-time weather, flight, traffic, react-based cockpits, and open source Google Earth GlTF data and flight sim applications like MSFS and X-Plane: https://news.ycombinator.com/item?id=38437821
Probably easier to determine whether a verbal or (lat, long, game) location reference refers to an in-game location or a virtual location in a game with the same name.
From https://developers.google.com/maps/documentation/tile/3d-til... re: Google Maps GlTF 3d tiles:
> Note: To render Google's photorealistic tiles, you must use a 3D Tiles renderer that supports the display of copyright attribution. You can use [CesiumJS or Cesium for Unreal]
Open source minecraft games with Python APIs:
> sensorcraft is like minecraft but in python with pyglet for OpenGL 3D; self.add_block(), gravity, ai, circuits
WebGL is more portable than OpenGL; or can't pyglet be compiled to WASM?
panda3d and godot easily compile to WASM FWIU.
An open source physics engine in Python that might be useful for minecraft clones: Genesis-Embodied-AI/Genesis; "Genesis: A Generative and Universal Physics Engine for Robotics and Beyond" https://news.ycombinator.com/item?id=42457213#42457224
From "Approximating mathematical constants using Minecraft" https://news.ycombinator.com/item?id=42319313 :
> [ luanti, minetest, mcpi, MCPI-Revival/minecraft-pi-reborn ]
That was done long ago with FlightGear, but just the maps.
Jimmy Carter has died
Jimmy Carter: https://en.wikipedia.org/wiki/Jimmy_Carter
Many analyses have since estimated the error of the projections that policy was being conducted according to, especially in the late 1970s.
"The Limits to Growth" (1971) https://en.wikipedia.org/wiki/The_Limits_to_Growth
Oil price shock (1973) https://en.wikipedia.org/wiki/1973_oil_crisis
Oil crisis (1979) https://en.wikipedia.org/wiki/1979_oil_crisis
"The Global 2000 Report to the President" (1980) https://en.wikipedia.org/wiki/The_Global_2000_Report_to_the_...
1980-: expensive international meddling we're still paying for, increased government spending, increased taxes on the middle class, oil governor, oil DCI, VP, president
1992-: significant reprieve from costly meddling, globalization, soccer, fallout from arms dropped on the eastern bloc after the end of the cold war, not much development in batteries or renewables, dot-com boom, WorldCom, GLBA deregulation, dot-com crash
2000s: humvees, hummers and ~18 MPG SUVs, debt (war expensed to national debt, tax cuts), oil commodity price volatility given an international production rate agreement, crony capitalist bailouts to top off the starve the beast scorched earth debt, almost 20 years of war with countries initially engaged in the 1980s, trillions in spending, tax cuts
"The Limits to Growth" (2004) https://en.wikipedia.org/wiki/The_Limits_to_Growth :
> "Limits to Growth: The 30-Year Update" was published in 2004. The authors observed that "It is a sad fact that humanity has largely squandered the past 30 years in futile debates and well-intentioned, but halfhearted, responses to the global ecological challenge. We do not have another 30 years to dither. Much will have to change if the ongoing overshoot is not to be followed by collapse during the twenty-first century."
Hyperspectral sensors help characterize polymers in e-waste to improve recycling
"Multi-sensor characterization for an improved identification of polymers in WEEE recycling" (2024) https://www.sciencedirect.com/science/article/pii/S0956053X2...
Advancing unidirectional heat flow: The next era of quantum thermal diodes
ScholarlyArticle: "Enhanced thermal rectification in coupled qutrit–qubit quantum thermal diode" (2024) https://pubs.aip.org/aip/apq/article/1/4/046123/3325095/Enha...
Unforgeable Quantum Tokens Delivered over Fiber Network
I am worried about the future of quantum tokens...
Whilst theoretically they are secure, I worry about potential huge side-channels allowing leaking of the key...
All it takes is a few extra photons emitted at some harmonic frequency for the key to be leaked...
I would much prefer dumb hardware and clever digital software, because at least software is much easier to secure against side channels, and much easier to audit.
Isn't the entire security of Quantum Communication predicated on its complete lack of side-channels due to the fact that measuring quantum systems collapses their wave function?
Yes, in theory. In practice, photon generators won't behave perfectly. There are lots of possible attacks, like photon splitting [1].
[1] https://onlinelibrary.wiley.com/doi/full/10.1002/qute.202300...
Once you add error correction, don't you lose all the nice properties of the no-cloning theorem? If the protocol tolerates 30% errors, doesn't it tolerate 30% MITM? (60%??)
You don't need error correction for some crypto primitives. There are QKD networks deployed that don't have that kind of error correction, as far as I know.
How can QKD repeaters store and forward or just forward without collapsing phase state?
How does photonic phase state collapse due to fiber mitm compare to a heartbeat on a classical fiber?
There is quantum counterfactual communication without entanglement FWIU? And there's a difference between QND "Quantum Non-Demolition" and "Interaction-free measurement"
From https://news.ycombinator.com/item?id=41480957#41533965 :
>> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
> That was probably the "Bell test" article, which - IIUC - does indeed indicate that if you can read 62% of the photons you are likely to find a loophole-free violation
> [ "Violation of Bell inequality by photon scattering on a two-level emitter", ]
Bell test > Detection loophole: https://en.wikipedia.org/wiki/Bell_test#Detection_loophole :
> when using a maximally entangled state and the CHSH inequality an efficiency of η > 2√2 − 2 ≈ 0.83 is required for a loophole-free violation. [51] Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3 ≈ 0.67, [52] which is the optimal bound for the CHSH inequality. [53] Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η > (√5 − 1)/2 ≈ 0.62 [54]
Isn't modern error detection and classical PQ sufficient to work with those odds?
> Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, [55] superconducting qubits, [56] and nitrogen-vacancy centers. [57] These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, [30][31] and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems. [10]
Portspoof: Emulate a valid service on all 65535 TCP ports
How does this compare to a tarpit?
Tarpit (networking) https://en.wikipedia.org/wiki/Tarpit_(networking)
/? inurl:awesome tarpit https://www.google.com/search?q=inurl%3Aawesome+tarpit+site%...
"Does "TARPIT" have any known vulnerabilities or downsides?" https://serverfault.com/questions/611063/does-tarpit-have-an...
https://gist.github.com/flaviovs/103a0dbf62c67ff371ff75fc62f... :
> However, if implemented incorrectly, TARPIT can also lead to resource exhaustion in your own server, specifically with the conntrack module. That's because conntrack is used by the kernel to keep track of network connections, and excessive use of conntrack entries can lead to system performance issues, [...]
> The script below uses packet marks to flag packets candidate for TARPITing. Together with the NOTRACK chain, this avoids the conntrack issue while keeping the TARPIT mechanism working.
The tarpit module used to be in-tree.
xtables-addons/ xt_TARPIT.c: https://github.com/tinti/xtables-addons/blob/master/extensio...
Haven't looked into this too deeply, but there is a difference between delaying a response (requests get stuck in the tarpit) and providing a useless but valid response. This approach always provides a response, so it uses more resources than ignoring the request, but fewer resources than keeping the connection open. Once the response is sent the connection can be closed, which isn't quite how a tarpit behaves. The Linux kernel only needs to track open requests in memory, so if connections are closed they can be removed from the kernel and thus use no more resources than a standard service listening on a port.
There is a small risk in that the service replies to requests on the port, though: as replies get more complicated in order to mimic services, you run the risk of an attacker exploiting the system making the replies. Put another way, this attempts to run a server that responds to incoming requests on every port, in a way that mimics what might run on each port. It technically opens up an attack surface on every port, because an attacker can feed it requests; but the trade-off is that it runs in user mode and could be granted nil permissions, or put on a honeypot machine that is disconnected from anything useful and heavily tripwired for unusual activity. Hardcoding a response to each port to make it appear open is itself a very simple activity, so the attack surface introduced is minimal while the utility of port scanning is greatly reduced. The more you fake out the scanning by behaving realistically to inputs, though, the greater the attack surface to exploit.
And port scanning can trigger false positives in network security scans, which can then lead to having to explain why the servers are configured this way, and that some ports that should always be closed due to vulnerability are open but not processing requests and can be ignored, etc.
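As a sketch of the respond-and-close behavior described above (the fake banner is hypothetical; Portspoof itself is a far more sophisticated C++ program):

```python
import socket
import threading

FAKE_BANNER = b"220 smtp.example.com ESMTP ready\r\n"  # hypothetical banner

def fake_service(banner):
    """Listen on an ephemeral localhost port; accept one connection, send a
    plausible banner, then close immediately so no per-connection state
    lingers (unlike a tarpit, which deliberately holds connections open)."""
    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        conn.sendall(banner)
        conn.close()  # respond, then drop: the opposite of a tarpit
        srv.close()

    threading.Thread(target=handle).start()
    return port

# A scanner connecting to the port sees a valid-looking service:
port = fake_service(FAKE_BANNER)
client = socket.create_connection(("127.0.0.1", port))
print(client.recv(64))  # the fake banner
client.close()
```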
The original LaBrea Tarpit avoids DOS'ing its own conntrack table somehow, too;
LaBrea.py: https://github.com/dhoelzer/ShowMeThePackets/blob/master/Sca...
La Brea Tar Pits and museum: https://en.wikipedia.org/wiki/La_Brea_Tar_Pits
The nerdctl README says: https://github.com/containerd/nerdctl :
> Supports rootless mode, without slirp overhead (bypass4netns)
How does that work, though? (And unfortunately podman replaced slirp4netns with pasta from passt.)
rootless-containers/bypass4netns: https://github.com/rootless-containers/bypass4netns/ :
> [Experimental] Accelerates slirp4netns using SECCOMP_IOCTL_NOTIF_ADDFD. As fast as `--net=host`
Which is good, because --net=host with rootless containers is inadvisable for security reasons FWIU.
"bypass4netns: Accelerating TCP/IP Communications in Rootless Containers" (2023) https://arxiv.org/abs/2402.00365 :
> bypass4netns uses sockets allocated on the host. It switches sockets in containers to the host's sockets by intercepting syscalls and injecting the file descriptors using Seccomp. Our method with Seccomp can handle statically linked applications that previous works could not handle. Also, we propose high-performance rootless multi-node communication. We confirmed that rootless containers with bypass4netns achieve more than 30x faster throughput than rootless containers without it
RunCVM, Kata Containers, and gVisor all have a better host/guest boundary than rootful or rootless containers, which is probably better for honeypot research on a different subnet.
IIRC there are various utilities for monitoring and diffing VMs, for honeypot research.
There could be a list of expected syscalls. If the simulated workload can be exhaustively enumerated, the expected syscalls are known ahead of time and so anomaly detection should be easier.
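For instance, a workload's strace output could be diffed against an allowlist of expected syscalls (the allowlist and trace lines here are hypothetical):

```python
import re

EXPECTED = {"openat", "read", "write", "close", "mmap", "epoll_wait"}

def unexpected_syscalls(strace_lines, expected=EXPECTED):
    """Return syscall names in strace-style output that are not expected."""
    seen = set()
    for line in strace_lines:
        # Matches lines like: openat(AT_FDCWD, "/etc/hosts", O_RDONLY) = 3
        m = re.match(r"(?:\[pid\s+\d+\]\s*)?(\w+)\(", line.strip())
        if m:
            seen.add(m.group(1))
    return sorted(seen - expected)

trace = [
    'openat(AT_FDCWD, "/etc/hosts", O_RDONLY) = 3',
    'read(3, "127.0.0.1 localhost", 4096) = 19',
    'ptrace(PTRACE_ATTACH, 1234) = -1 EPERM',  # anomalous for this workload
]
print(unexpected_syscalls(trace))  # ['ptrace']
```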
"Oh, like Ghostbusters."
I tried something like that. It didn't work because the application added the socket to an epoll set before binding it, so before it could be replaced with a host socket. Replacing the file descriptor in the FD table doesn't replace it in epoll sets.
Show HN: Llama 3.1 8B CPU Inference in a Browser via WebAssembly
This is a demo of what's possible to run on edge devices using SOTA quantization. Other similar projects that try to run 8B models in the browser either use WebGPU or 2-bit quantization that breaks the model. I implemented inference of the AQLM quantized representation, producing a model that has 2-bit quantization and does not blow up.
Could this be done with WebNN? If not, how should the spec change?
Re: WebNN, window.ai, navigator.ml: https://news.ycombinator.com/item?id=40834952
Not really a WebNN expert, but looks like it doesn't support CPU inference yet and only works in Chrome. It also lacks support for custom kernels, which we need for running AQLM-quantized models.
When you ask about spec change, do you mean WebNN spec or something else?
There's WebNN, and WebGPU, but no WebTPU; so I guess WebNN would be it
"Web Neural Network API" W3C Candidate Recommendation Draft https://www.w3.org/TR/webnn/
"WebGPU" W3C Candidate Recommendation Snapshot: https://www.w3.org/TR/webgpu/
WebGPU: https://en.wikipedia.org/wiki/WebGPU
Tested koboldcpp with a 7B GGUF model because it should work with 8 GB of VRAM; but FWIU kobold somehow pages between RAM and GPU VRAM. How does the user, in WASM, know that they have insufficient RAM for a model?
Twisted light: The Edison bulb has purpose again
ScholarlyArticle: "Bright, circularly polarized black-body radiation from twisted nanocarbon filaments" (2024) https://www.science.org/doi/10.1126/science.adq4068
Toward testing the quantum behavior of gravity: A photonic quantum simulation
"Photonic implementation of quantum gravity simulator" (2024) https://www.spiedigitallibrary.org/journals/advanced-photoni...
Relativistic quantum fields are universal entanglement embezzlers
"Relativistic quantum fields are universal entanglement embezzlers" (2024) https://journals.aps.org/prl/accepted/32073Yd2G141638436cb80... :
> Abstract: Embezzlement of entanglement refers to the counterintuitive possibility of extracting entangled quantum states from a reference state of an auxiliary system (the “embezzler”) via local quantum operations while hardly perturbing the latter. We uncover a deep connection between the operational task of embezzling entanglement and the mathematical classification of von Neumann algebras. Our result implies that relativistic quantum fields are universal embezzlers: Any entangled state of any dimension can be embezzled from them with arbitrary precision. This provides an operational characterization of the infinite amount of entanglement present in the vacuum state of relativistic quantum field theories.
"Quantum entanglement can be endlessly 'embezzled' from quantum fields" (2024) https://www.newscientist.com/article/2461485-quantum-entangl...
[Note: It's not necessary to make a second comment. You can edit your post for 1 or 2 hours. (I never remember.) If you change the comment too much, add a note, especially if someone replied to your comment.]
Edit: Just like this. For obvious typos I usually don't add the note anyway.
CLI tool to insert spacers when command output stops
- In Python, Sarge has code to poll stdout and stderr
- Years ago for timesheets, I created a "gap report" that finds when the gap between the last dated events exceeds a threshold.
- The VS Code terminal indicates which stdout/stderr output came from which command with a circle decoration.
This says it's "Integrated > Shell Integration: Decorations Enabled" to configure the feature; https://stackoverflow.com/questions/73242939/vscode-remove-c...
- Are there other shells that indicate which stdout and stderr are from which process? What should it do about e.g. reset ANSI shell escape sequences?
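The "gap report" idea above can be sketched with stdlib datetime (the function name and threshold are mine, not from any particular tool):

```python
from datetime import datetime, timedelta

def gap_report(timestamps, threshold=timedelta(minutes=30)):
    """Yield (start, end, gap) for consecutive events whose spacing
    exceeds the threshold -- i.e. where a spacer would be inserted."""
    ordered = sorted(timestamps)
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > threshold:
            yield prev, cur, cur - prev

events = [
    datetime(2024, 1, 1, 9, 0),
    datetime(2024, 1, 1, 9, 5),
    datetime(2024, 1, 1, 11, 0),   # 1h55m after the previous event
    datetime(2024, 1, 1, 11, 10),
]
gaps = list(gap_report(events))
```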
Stopping by Woods on a Snowy Evening (1923)
"Stopping by Woods on a Snowy Evening" (1922) https://en.wikipedia.org/wiki/Stopping_by_Woods_on_a_Snowy_E...
WaveOrder: Generalist framework for label-agnostic computational microscopy
"waveOrder: generalist framework for label-agnostic computational microscopy" (2024) https://arxiv.org/abs/2412.09775 :
> Abstract: Correlative computational microscopy is accelerating the mapping of dynamic biological systems by integrating morphological and molecular measurements across spatial scales, from organelles to entire organisms. Visualization, measurement, and prediction of interactions among the components of biological systems can be accelerated by generalist computational imaging frameworks that relax the trade-offs imposed by multiplex dynamic imaging. This work reports a generalist framework for wave optical imaging of the architectural order (waveOrder) among biomolecules for encoding and decoding multiple specimen properties from a minimal set of acquired channels, with or without fluorescent labels. waveOrder expresses material properties in terms of elegant physically motivated basis vectors directly interpretable as phase, absorption, birefringence, diattenuation, and fluorophore density; and it expresses image data in terms of directly measurable Stokes parameters. We report a corresponding multi-channel reconstruction algorithm to recover specimen properties in multiple contrast modes. With this framework, we implement multiple 3D computational microscopy methods, including quantitative phase imaging, quantitative label-free imaging with phase and polarization, and fluorescence deconvolution imaging, across scales ranging from organelles to whole zebrafish. These advances are available via an extensible open-source computational imaging library, waveOrder, and a napari plugin, recOrder.
mehta-lab/waveorder: https://github.com/mehta-lab/waveorder :
> Wave optical models and inverse algorithms for label-agnostic imaging of density & orientation
> wave-optical simulation and reconstruction of optical properties that report microscopic architectural order.
mehta-lab/recOrder: https://github.com/mehta-lab/recOrder .. https://www.napari-hub.org/plugins/recOrder-napari :
> 3D quantitative label-free imaging with phase and polarization
> recOrder is a collection of computational imaging methods. It currently provides QLIPP (quantitative label-free imaging with phase and polarization), phase from defocus, and fluorescence deconvolution.
Cognitive load is what matters [Programming]
I think the effects of cognitive load on code readability can be demonstrated with variable and function names:
phi
s
ses
session
cursor_session
database_session
dbs
db.session
Really short variable names reduce the cognitive burden for users who understand that "phi" means "database session", but frustrate users who aren't contextually familiar. Really long variable names make code too difficult to verbalize and thus comprehend.
Is 7±2 a practical limit to working memory for seasoned coders, though?
First demonstration of quantum teleportation over busy Internet cables
ScholarlyArticle: "Quantum teleportation coexisting with classical communications in optical fiber" (2024) https://arxiv.org/abs/2404.10738
Tracing packets in the Linux kernel networking stack and friends
> When storing events for later post-processing, the packets' journeys can be reconstructed: [...]
> Retis offers many more features including retrieving conntrack information, advanced filtering, monitoring dropped packets and dropped packets from Netfilter, generating pcap files from the collected packets, allowing writing post-processing scripts in Python and more.
Would syntax highlighting be a useful general feature, or should that be a post-processing script in e.g. Python?
It would definitely be useful. This is part of the plan and we started exploring different possibilities (early stage, at the moment). Thank you for the feedback and for filing the feature request on GH.
Hey np. Also,
Wireshark and also tshark iirc support custom protocol dissectors;
"How can I add a custom protocol analyzer to wireshark?" https://stackoverflow.com/questions/4904991/how-can-i-add-a-...
What can the pcap files contain?
The pcap-ng file will contain packets (L2/L3/L4 and so forth, but up to 255 bytes), each annotated with a comment that tells you from what kernel function or tracepoint the packet was "captured". For the time being you can generate pcapng files filtering packets based on a single probe (e.g. all filtered/tracked packets hitting `net:net_dev_start_xmit`). You can then use Wireshark (or any tool you prefer) to dissect and further process. Custom dissectors should not be required.
Can Wireshark parse comments in pcapng files, even with a dissector?
/? https://www.google.com/search?q=Can+Wireshark+parse+comments... :
frame.comment contains "Your string"
And there's apparently a way to add a custom column to display frame.comment from pcapng traces in Wireshark.
Yep. For Wireshark/Tshark, display filters (including "frame.comment contains ...") can be used as usual. Of course, if your pcap file contains only frames with the same comment, that expression is not particularly useful, but you can merge multiple files with e.g. mergecap.
The pcap subcommand, though, will be extended to allow extracting packets from multiple probes in a single run.
Show HN: Gribstream.com – Historical Weather Forecast API
Hello! I'd like to share my side project https://gribstream.com
It is an API to extract weather forecasting data from the National Blend of Models (NBM) https://vlab.noaa.gov/web/mdl/nbm and the Global Forecast System (GFS) https://www.ncei.noaa.gov/products/weather-climate-models/gl... . The data is freely available from AWS S3 in grib2 format which can be great but also really hard (and resource intensive) to work with, especially if you want to extract timeseries over long periods of time based on a few coordinates. Being able to query and extract only what you want out of terabytes of data in just an http request is really nice.
What is cool about this dataset is that it has hourly data with full forecast history so you can use the dataset to train and forecast other parameters and have proper backtesting because you can see the weather "as of" points in time in the past. It has a free tier so you can play with it. There is a long list of upcoming features I intend to implement and I would very much appreciate both feedback on what is currently available and on what features you would be most interested in seeing. Like... I'm not sure if it would be better to support a few other datasets or focus on supporting aggregations.
Features include:
- A free tier to help you get started
- Full history of weather forecasts
- Extract timeseries for thousands of coordinates, for months at a time, at hourly resolution in a single HTTP request taking only seconds
- Supports as-of/time-travel, indispensable for proper backtesting of derivative models
- Automatic gap filling of any missing data with the next best (most recent) forecast
Please try it out and let me know what you think :)
In my experience, NWP data is big. Like, really big! Data over HTTP calls seems to limit the use case a bit; have you considered making it possible to mount storage directly (fsspec), and use e.g. the Zarr format? That way, querying with xarray would be much more flexible.
Why is weather data stored in netcdf instead of tensors or sparse tensors?
Also, SQLite supports virtual tables that can be backed by Content Range requests; https://www.sqlite.org/vtab.html
sqlite-wasm-http, sql.js-httpvfs; HTTP VFS: https://www.npmjs.com/package/sqlite-wasm-http
sqlite-parquet-vtable: https://github.com/cldellow/sqlite-parquet-vtable
Could there be a sqlite-netcdf-vtable or a sqlite-gribs-vtable, or is the dimensionality too much for SQLite?
From https://news.ycombinator.com/item?id=31824578 :
> It looks like e.g. sqlite-parquet-vtable implements shadow tables to memoize row group filters. How does JOIN performance vary amongst sqlite virtual table implementations?
https://news.ycombinator.com/item?id=42264274
SpatialLite does geo vector search with SQLite.
datasette can JOIN across multiple SQLite databases.
Perhaps datasette and datasette-lite could support xarray and thus NetCDF-style multidimensional arrays in WASM in the browser with HTTP Content Range requests to fetch and cache just the data requested
"The NetCDF header": https://climateestimate.net/content/netcdfs-and-basic-coding... :
> The header can also be used to verify the order of dimensions that a variable is saved in (which you will have to know to use, unless you’re using a tool like xarray that lets you refer to dimensions by name) - for a 3-dimensional variable, `lon,lat,time` is common, but some files will have the `time` variable first.
"Loading a subset of a NetCDF file": https://climateestimate.net/content/netcdfs-and-basic-coding...
From https://news.ycombinator.com/item?id=42260094 :
> xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i...
%FETCH <url> <filename>
> Why is weather data stored in netcdf instead of tensors or sparse tensors?
NetCDF is a "tensor", at least in the sense of being a self-describing multi-dimensional array format. The bigger problem is that it's not a Cloud-Optimized format, which is why Zarr has become popular.
> Also, SQLite supports virtual tables that can be backed by Content Range requests
The multi-dimensional equivalent of this is "virtual Zarr". I made this library to create virtual Zarr stores pointing at archival data (e.g. netCDF and GRIB)
https://github.com/zarr-developers/VirtualiZarr
> xarray and thus NetCDF-style multidimensional arrays in WASM in the browser with HTTP Content Range requests to fetch and cache just the data requested
Pretty sure you can do this today already using Xarray and fsspec.
In theory it could be done. It is sort of analogous to what GribStream is doing already.
The grib2 files are the storage. They are sorted by time in the path and so that is used like a primary index. And then grib2 is just a binary format to decode to extract what you want.
I originally was going to write this as a plugin for ClickHouse, but in the end I made it a Golang API because then I'm less constrained. For example, I'd like to create an endpoint to live-encode the grib files into MP4 so the data can be served as video. And then with any video player you would be able to play back, jump to times, etc.
I might still write a clickhouse integration though because it would be amazing to join and combine with other datasets on the fly.
> It is sort of analogous to what GribStream is doing already.
The difference is presumably that you are doing some large rechunking operation on your server to hide from the user the fact that the data is actually in multiple files?
Cool project btw, would love to hear a little more about how it works underneath :)
Yeah, exactly.
I basically scrape all the grib index files to know the offsets into all variables for all time. I store that in ClickHouse.
When the API gets a request for a time range, a set of coordinates, and a set of weather parameters, I first pre-compute the mapping of (lat,lon) into the 1-dimensional index in the gridded data. That is a constant across the whole dataset. Then I query the ClickHouse table to find all the files+offsets that need to be processed, and all of them are queued into a multi-processing pool.
Processing each parameter implies parsing a grib file. I wrote a grib2 parser from scratch in Golang so as to extract the data in a streaming fashion. As in... I don't extract the whole grid only to look up the coordinates in it. I already pre-computed the indexes, so I can just decode every value in the grid in order and, when I hit an index that I'm looking for, copy it to a fixed-size buffer of extracted data. When you have all the pre-computed indexes, you don't even need to finish downloading the file; I just drop the connection immediately.
It is pretty cool. It is running in very humble hardware so I'm hoping I'll get some traction so I can throw more money into it. It should scale pretty linearly.
I've tested doing multi-year requests and the golang program never goes over 80 MB of memory usage. The CPUs get pegged, so that is the limiting factor.
Grib2 complex packing (what the NBM dataset uses) implies lots of bit-packing. So there is a ton more to optimize using SIMD instructions. I've been toying with it a bit but I don't want to mission creep into that yet (fascinating though!).
I'm tempted to port this https://github.com/fast-pack/simdcomp to native go ASM.
Just a heads up that there is a similar Python package for this that is free called Herbie [1], though the syntax of this one looks a little easier to use.
blaylockbk/Herbie: https://github.com/blaylockbk/Herbie :
> Download numerical weather prediction datasets (HRRR, RAP, GFS, IFS, etc.) from NOMADS, NODD partners (Amazon, Google, Microsoft), ECMWF open data, and the Pando Archive System
The Herbie docs mention a "GFS GraphCast" but not yet GenCast? https://herbie.readthedocs.io/en/stable/gallery/noaa_models/...
"GenCast predicts weather and the risks of extreme conditions with state-of-the-art accuracy" (2024) https://deepmind.google/discover/blog/gencast-predicts-weath...
"Probabilistic weather forecasting with machine learning" (2024) ; GenCast paper https://www.nature.com/articles/s41586-024-08252-9
google-deepmind/graphcast: https://github.com/google-deepmind/graphcast
Are there error and cost benchmarks for these predictive models?
'Profound consequences': NZ scientists make 'dark energy' breakthrough
"Supernovae evidence for foundational change to cosmological models" (2024) https://academic.oup.com/mnrasl/article/537/1/L55/7926647
Borrow Checking, RC, GC, and Eleven Other Memory Safety Approaches
From https://news.ycombinator.com/item?id=33560227#33563857 :
- Type safety > Memory management and type safety: https://en.wikipedia.org/wiki/Type_safety#Memory_management_...
- Memory safety > Classification of memory safety errors: https://en.wikipedia.org/wiki/Memory_safety#Classification_o...
- Template:Memory management https://en.wikipedia.org/wiki/Template:Memory_management
- Category:Memory_management https://en.wikipedia.org/wiki/Category:Memory_management
Ask HN: Are you paying electricity bills for your service?
Hi HN, We are an early-stage startup working on OS-level AI energy optimization. Our goal is to revolutionize the way systems manage power consumption at the OS level, making AI and compute more energy-efficient and sustainable. Our initial prototype has demonstrated significant savings (20%+) without application/model changes.
We are looking to connect with individuals who have exposure to energy costs, energy purchasing or a strong desire to optimize them at the server and datacenter level. Whether you are working in data centers, telco, AI infra, or any area where energy efficiency is crucial, we would love to discuss potential challenges, solutions, and opportunities in this space. We're eager to learn from your experiences and explore ways we can collaborate to make a meaningful impact. Drop us a note at scottcha@live.com and we'll connect up.
Thanks Scott & Chad
Various solutions for (incentivizing) energy efficiency:
- https://app.electricitymaps.com/map/24h
- "First open-source global energy system model" https://news.ycombinator.com/item?id=39333285
- There aren't intraday electricity prices in most markets in the United States. (EU Member States, by comparison, are required to have intraday electricity markets.) Wouldn't intraday prices incentivize energy storage solutions and energy efficiency?
- "Ask HN: Does mounting servers parallel with the temperature gradient trap heat?" (2020) https://news.ycombinator.com/item?id=23033210
- "Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
- "New HPE fanless cooling achieves a 90% reduction in cooling power consumption" (2024) https://news.ycombinator.com/item?id=42106715
- "Next-generation datacenters consume zero water for cooling" (2024) https://news.ycombinator.com/item?id=42376406
- "Three New Supercomputers Reach Top of Green500 List" (2024) https://news.ycombinator.com/item?id=40473656
/? costed opcodes (smart contracts, eWASM,)
Thanks, Yes I'm a long time electricity maps customer. I also agree that the EU markets data transparency is further along in helping incentivize opportunities here.
Also thanks for the other links!
It looks like they have data for the United States now.
We have public utilities that must raise bonds to grow to scale with demand, and some competitive electrical energy markets with competitive margin and efficiency incentives in the US.
FWIU, datacenters are unable to sell their waste heat, boiled sterilized steam and water, unused diesel, and potentially excess energy storage.
To sell energy back to the grid to help solve the Duck curve and Alligator curve problems requires a smart grid 2.0 or a green field without much existing infrastructure to cross over or under.
To sell datacenter waste heat through a pipe under the street to the building next door, you must add some heat.
Nonprofit org opportunity: a screenplay wherein {{Superhero}} explains this and other efficiency and sustainability opportunities to the market at large, perhaps with comically excessive product placement on behalf of a charity or charities, and the physics rendered in 3D CG
"Solar thermal trapping at 1,000°C and above" (2024) should be enough added heat to move waste datacenter heat to a compatible adjacent facility.
Sand batteries hold more heat than water, in certain thermal conditions.
Algae farms can eat CO2 and Heat, for example.
Cooling towers waste heat and water as steam.
Nonprofit org opportunity: a directory with for-charity sponsored lists of renewable energy products and services, with regions of operation; though there's already a way to list a schema:LocalBusiness and its schema:Offer(s) for products and services, with rdfs:Class and rdfs:Property, for search engines to someday index
Business opportunity: managed renewable energy service to quote solar/wind/hydro/geo/fauna/insulation site plans [for nonprofit organizations], with support contracts and safety inspection and also crew management
The Landauer limit is presumed to be a real limit to electronic computation: erasing a bit (changing a 1 to a 0) releases energy that can't be recovered, so it's wasted as thermal energy.
Photonic computing, graphene computing, counterfactual computing, superconductors, and probably edge chiral currents do or will eventually do more computation per unit of energy.
Ops-per-Wh metrics quantify the difference.
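For scale, the Landauer bound at room temperature works out to (stdlib back-of-envelope):

```python
import math

k_B = 1.380649e-23              # Boltzmann constant, J/K (exact, SI 2019)
T = 300.0                       # room temperature, K
E_bit = k_B * T * math.log(2)   # minimum energy to erase one bit
# ~2.87e-21 J per erased bit; one joule pays for ~3.5e20 bit erasures
```

Real CMOS dissipates many orders of magnitude more than this per bit, which is where the headroom for photonic, superconducting, and reversible approaches comes from.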
Jupyter Notebooks as E2E Tests
You can just use nbclient/nbconvert as a library instead of on cmd. Execute the notebook, and the cell outputs are directly available as python objects [1]. Error handling is also inbuilt.
This should make integration with pytest etc, much simpler.
[1] https://nbconvert.readthedocs.io/en/latest/execute_api.html#...
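A stdlib-only sketch of the notebook JSON shape involved: after execution (by nbclient's ExecutePreprocessor or similar), each code cell carries its outputs inline, so they can be read back as plain Python objects. The notebook dict below is hand-built to match the nbformat v4 shape, not produced by nbclient.

```python
import json

# Minimal executed-notebook structure (nbformat v4 shape, simplified).
nb = {
    "cells": [
        {
            "cell_type": "code",
            "source": "1 + 1",
            "outputs": [
                {"output_type": "execute_result",
                 "data": {"text/plain": "2"}},
            ],
        },
        {"cell_type": "markdown", "source": "# notes"},
    ]
}

def cell_results(notebook):
    """Collect text/plain execute_result payloads from code cells."""
    results = []
    for cell in notebook["cells"]:
        if cell["cell_type"] != "code":
            continue
        for out in cell.get("outputs", []):
            if out.get("output_type") == "execute_result":
                results.append(out["data"]["text/plain"])
    return results

# round-trip through JSON, as if loaded from an .ipynb file
results = cell_results(json.loads(json.dumps(nb)))
```

An E2E assertion then reduces to comparing `results` against expected values.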
There are a number of ways to run tests within notebooks and to run notebooks as tests.
ipytest: https://github.com/chmp/ipytest
nbval: https://github.com/computationalmodelling/nbval
papermill: https://github.com/nteract/papermill
awesome-jupyter > Testing: https://github.com/markusschanta/awesome-jupyter
"Is there a cell tag convention to skip execution?" https://discourse.jupyter.org/t/is-there-a-cell-tag-conventi...
Methods for importing notebooks like regular Python modules: ipynb, import_ipynb, nbimporter, nbdev
nbimporter README on why not to import notebooks like modules: https://github.com/grst/nbimporter#update-2019-06-i-do-not-r...
nbdev parses jupyter notebook cell comments to determine whether to export a module to a .py file: https://nbdev1.fast.ai/export.html
There's a Jupyter kernel for running ansible playbooks as notebooks with output.
dask-labextension does more of a retry-on-failure workflow
Biodegradable aerogel: Airy cellulose from a 3D printer
ScholarlyArticle: "Additive Manufacturing of Nanocellulose Aerogels with Structure-Oriented Thermal, Mechanical, and Biological Properties" (2024) https://onlinelibrary.wiley.com/doi/10.1002/advs.202307921
From https://news.ycombinator.com/item?id=13912857 :
> “After we 3-D print, we restore the hydrogen bonding network through a sodium hydroxide treatment,” Pattinson says. “We find that the strength and toughness of the parts we get … are greater than many commonly used materials” for 3-D printing, including acrylonitrile butadiene styrene (ABS) and polylactic acid (PLA).
From https://news.ycombinator.com/item?id=41314454 :
> "Plant-based epoxy enables recyclable carbon fiber" (2022) [that's stronger than steel and lighter than fiberglass] https://news.ycombinator.com/item?id=30138954 ... https://news.ycombinator.com/item?id=37560244
"Scientists convert bacteria into efficient cellulose producers" (2024) https://news.ycombinator.com/item?id=41175002
Do hemp and cellulose 3d-printing filaments emit less VOCs?
"Japanese Joinery, 3D printing, and FreeCAD" (2022) https://news.ycombinator.com/item?id=36191589
We developed a way to use light to dismantle PFAS 'forever chemicals'
A recycled story from a month ago (21 points, 1 comment) https://news.ycombinator.com/item?id=42199625
Which seems to be related to a different June study Low-Cost Method to Decompose PFAS Using LED Light and Semiconductor Nanocrystals (29 points, 5 months ago, 26 comments) https://news.ycombinator.com/item?id=41062885
Distinctly, this is a subsequent article by the authors published by phys.org
"Photocatalytic C–F bond activation in small molecules and polyfluoroalkyl substances" (2024) https://www.nature.com/articles/s41586-024-08327-7 :
> Abstract: [...] Here, we report an organic photoredox catalyst system that can efficiently reduce C–F bonds to generate carbon-centered radicals, which can then be intercepted for hydrodefluorination (swapping F for H) and cross-coupling reactions. This system enables the general use of organofluorines as synthons under mild reaction conditions. We extend this method to the defluorination of polyfluoroalkyl substances (PFAS) and fluorinated polymers, a critical challenge in the breakdown of persistent and environmentally damaging forever chemicals.
Natural Number Game: build the basic theory of the natural numbers from scratch
I’ve really been getting into Lean, and these tutorials are fantastic. It’s always interesting to see how many hidden assumptions are made in short and informal mathematical proofs that end up becoming a (very long) sequence of formal transformations when all of the necessary steps are written out explicitly.
My guess is that mathematical research utilizing these systems is probably the future of most of the field (perhaps to the consternation of the old guard). I don’t think it will replace human intuition, but the hope is that it will augment it in ways we didn’t expect or think about. Terence Tao has a great blog post about this, specifically about how proof assistants can improve collaboration.
On another note, these proof systems always prompt me to think about the uncomfortable paradox that underlies mathematics, namely, that even the simplest of axioms cannot be proven “ex nihilo” — we merely trust that a contradiction will never arise (indeed, if the axioms corresponding to the formal system of a sufficiently powerful theory are actually consistent, we cannot prove that fact from within the system itself).
The selection of which axioms (or finite axiomatization) to use is guided by... what exactly then? Centuries of human intuition and not finding a contradiction so far? That doesn’t seem very rigorous. But then if you try to select axioms probabilistically (e.g., via logical induction), you end up with a circular dependency, because the predictions that result are only as rigorous as the axioms underlying the theory of probability itself.
To be clear, I’m not saying anything actual mathematicians haven’t already thought about (or particularly care about that much for their work), but for a layperson, the increasing popularity of these formal proof assistants certainly brings the paradox to the forefront.
The selection of which axioms to use is an interesting topic. Historically, formal axiomatization was driven by Hilbert and company while trying to make progress in set theory. The complexity of the topic naturally led them to formalize their work, and thus arose ZF(C) [1], which later stuck as the de facto basis of modern mathematics.
Later systems, like Univalent Foundations [2], came from limitations of ZFC and the desire for a set of axioms that is easier to work with (e.g. for computer proof assistants). The choice of any new system of axioms is ultimately constrained by the scope of what ZFC can do, so as to preserve the past two centuries of mathematical work.
[1] https://en.wikipedia.org/wiki/Zermelo-Fraenkel_set_theory
Homotopy type theory (HoTT) https://en.wikipedia.org/wiki/Homotopy_type_theory
/? From a set like {0,1} to a wave function of reals in Hilbert space [to Constructor Theory and Quantum Counterfactuals] https://www.google.com/search?q=From+a+set+like+%7B0%2C1%7D+... , https://www.google.com/search?q=From+a+set+like+%7B0%2C1%7D+...
From "What do we mean by "the foundations of mathematics"?" (2023) https://news.ycombinator.com/item?id=38102096#38103520 :
> HoTT in CoQ: Coq-HoTT: https://github.com/HoTT/Coq-HoTT
>>> The HoTT library is a development of homotopy-theoretic ideas in the Coq proof assistant. It draws many ideas from Vladimir Voevodsky's Foundations library (which has since been incorporated into the UniMath library) and also cross-pollinates with the HoTT-Agda library. See also: HoTT in Lean2, Spectral Sequences in Lean2, and Cubical Agda.
leanprover/lean2 /hott: https://github.com/leanprover/lean2/tree/master/hott
Lean4:
"Theorem Proving in Lean 4" https://lean-lang.org/theorem_proving_in_lean4/
Learn X in Y minutes > Lean 4: https://learnxinyminutes.com/lean4/
/? Hott in lean4 https://www.google.com/search?q=hott+in+lean4
> What is the relation between Coq-HoTT & Homotopy Type Theory and Set Theory with e.g. ZFC?
The Equal Rights Amendment is already part of the U.S. Constitution
Equal Rights Amendment > Recent Congressional action: https://en.wikipedia.org/wiki/Equal_Rights_Amendment#Recent_...
Declaration of Independence: https://www.archives.gov/founding-docs/declaration :
> We hold these truths to be self-evident, that all men are created equal,
Fifth Amendment to the United States Constitution: https://en.wikipedia.org/wiki/Fifth_Amendment_to_the_United_... :
> Like the Fourteenth Amendment, the Fifth Amendment includes a due process clause stating that no person shall "be deprived of life, liberty, or property, without due process of law". The Fifth Amendment's Due Process Clause applies to the federal government, while the Fourteenth Amendment's Due Process Clause applies to state governments (and by extension, local governments). The Supreme Court has interpreted the Fifth Amendment's Due Process Clause to provide two main protections: procedural due process, which requires government officials to follow fair procedures before depriving a person of life, liberty, or property, and substantive due process, which protects certain fundamental rights from government interference. The Supreme Court has also held that the Due Process Clause contains a prohibition against vague laws and an implied equal protection requirement similar to the Fourteenth Amendment's Equal Protection Clause.
Fourteenth Amendment: https://en.wikipedia.org/wiki/Fourteenth_Amendment_to_the_Un...
Fourteenth Amendment > Equal Protection Clause: https://en.wikipedia.org/wiki/Equal_Protection_Clause :
> The Equal Protection Clause is part of the first section of the Fourteenth Amendment to the United States Constitution. The clause, which took effect in 1868, provides "nor shall any State ... deny to any person within its jurisdiction the equal protection of the laws."
Equal justice under law: https://en.wikipedia.org/wiki/Equal_justice_under_law
Equality before the law: https://en.wikipedia.org/wiki/Equality_before_the_law :
> Article 7 of the Universal Declaration of Human Rights (UDHR) states: "All are equal before the law and are entitled without any discrimination to equal protection of the law". [1] Thus, it states that everyone must be treated equally under the law regardless of race, gender, color, ethnicity, religion, disability, or other characteristics, without privilege, discrimination or bias. The general guarantee of equality is provided by most of the world's national constitutions, [4] but specific implementations of this guarantee vary.
Equal Protection of Equal Rights.
Strict scrutiny iff: https://en.wikipedia.org/wiki/Strict_scrutiny
UDHR: Universal Declaration of Human Rights: https://www.un.org/en/about-us/universal-declaration-of-huma... :
> Article 7: All are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination.
> Article 8: Everyone has the right to an effective remedy by the competent national tribunals for acts violating the fundamental rights granted him by the constitution or by law.
> Article 10: Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.
> Article 16: Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family. They are entitled to equal rights as to marriage, during marriage and at its dissolution.
UDHR: https://en.wikipedia.org/wiki/Universal_Declaration_of_Human...
(1) Equal Protection of (2) Equal Rights.
Inalienable right: https://en.wikipedia.org/wiki/Inalienable_right ( -> Natural rights and legal rights (in British Common law since the Magna Carta))
Life:
Liberty: https://en.wikipedia.org/wiki/Liberty :
> In the Constitutional law of the United States, ordered liberty means creating a balanced society where individuals have the freedom to act without unnecessary interference (negative liberty) and access to opportunities and resources to pursue their goals (positive liberty), all within a fair legal system.
Pursuit of Happiness:
Searching for small primordial black holes in planets, asteroids, here on Earth
https://news.ycombinator.com/item?id=42347561 (1 point)
Maybe the gravity hole in the Indian Ocean is due to a primordial black hole
A black hole would orbit the center of the Earth; it wouldn't be lodged in place somewhere.
/? What happens if a black hole hits Earth? https://www.google.com/search?q=What+Happens+If+A+Black+Hole...
Though, the same folks claim there could be a microscopic black hole every 30 km on or throughout earth
https://news.ycombinator.com/item?id=38452488 :
> PBS Spacetime estimates that there are naturally occurring microscopic black holes every 30 km on Earth. https://news.ycombinator.com/item?id=33483002
> [...]
> Isn't there a potential for net relative displacement and so thus couldn't (microscopic) black holes be a space drive?
How did software get so reliable without proof? (1996) [pdf]
"How Did Software Get So Reliable Without Proof?" (1996) : https://6826.csail.mit.edu/2020/papers/noproof.pdf
> Abstract: By surveying current software engineering practice, this paper reveals that the techniques employed to achieve reliability are little different from those which have proved effective in all other branches of modern engineering: rigorous management of procedures for design inspection and review; quality assurance based on a wide range of targeted tests; continuous evolution by removal of errors from products already in widespread use; and defensive programming, among other forms of deliberate over-engineering. Formal methods and proof play a small direct role in large scale programming; but they do provide a conceptual framework and basic understanding to promote the best of current practice, and point directions for future improvement.
From "Extreme Programming" > History: https://en.wikipedia.org/wiki/Extreme_programming :
> He began to refine the development methodology used in the project [in 1996] and wrote a book on the methodology (Extreme Programming Explained, published in October 1999)
> Many extreme-programming practices have been around for some time; the methodology takes "best practices" to extreme levels. For example, the "practice of test-first development, planning and writing tests before each micro-increment" was used as early as NASA's Project Mercury, in the early 1960s.
Agile > History: https://en.wikipedia.org/wiki/Agile_software_development#His... :
> Iterative and incremental software development methods can be traced back as early as 1957, [10] with evolutionary project management [11][12] and adaptive software development [[ ASD ]] emerging in the early 1970s.
> During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods (often referred to collectively as waterfall) that critics described as overly regulated, planned, and micromanaged. [15] These lightweight methods included: rapid application development (RAD), from 1991; [16][17] the unified process (UP) and dynamic systems development method (DSDM), both from 1994; Scrum, from 1995; Crystal Clear and extreme programming (XP), both from 1996; and feature-driven development (FDD), from 1997. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods. [3]
> "Manifesto for Agile Software Development" [2001]
awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/
/? z3
From https://news.ycombinator.com/item?id=41944043 :
> - Does Z3 distinguish between xy+z and z+yx, in terms of floating point output? math.fma() would be a different function call with the same or similar output.
> (deal-solver is a tool for verifying formal implementations in Python with Z3.)
- re: property testing, fuzzing, formal verification: https://news.ycombinator.com/item?id=40884466 https://westurner.github.io/hnlog/#story-40875559
From "The Future of TLA+ [pdf]" (2024) https://news.ycombinator.com/item?id=41385141 :
> Formal methods including TLA+ also can't/don't prevent or can only workaround side channels in hardware and firmware that is not verified. But that's a different layer.
Things formal methods shouldn't be expected to find: Floating point arithmetic non-associativity, side-channels
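The floating-point caveat is easy to demonstrate; a couple of lines of Python show addition failing associativity:

```python
# Floating-point addition is not associative: the order of operations
# changes the rounding, so models of exact real arithmetic don't apply.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # 0.0 + 1.0 -> 1.0
right = a + (b + c)  # -1e16 + 1.0 rounds back to -1e16, so the sum is 0.0

print(left, right)    # 1.0 0.0
print(left == right)  # False
```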
> "Use of Formal Methods at Amazon Web Services" (2014) https://lamport.azurewebsites.net/tla/formal-methods-amazon....
>> This raised a challenge; how to convey the purpose and benefits of formal methods to an audience of software engineers? Engineers think in terms of debugging rather than ‘verification’, so we called the presentation “Debugging Designs” [8] . Continuing that metaphor, we have found that software engineers more readily grasp the concept and practical value of TLA+ if we dub it:
Exhaustively testable pseudo-code
> We initially avoid the words [...]
Ask HN: Worst gravity and black hole art, and CG labeling guidelines for science
I think that computer graphic simulations of gravity and black holes should be labeled as simulations.
The same technology that we use to label or watermark genai image and video content could be used to label n-body gravity and black hole images.
"Is this a direct physical observation? Is this retouched; did they add color, did they refocus, etc? Is this a presumptively contrived simulation rendering that shouldn't have same strength of evidence as a less biased direct physical observation?"
Almost all images and videos of black holes are just simulations.
This likely affects our intuition. People try to fit models to observations. If the observations are conjecture but aren't labeled as such, then people end up with unfortunately biased intuition.
In the interest of progress in science, what are some of the worst black hole and gravity computer graphics you've seen?
How could science programming improve its citations in presenting simulated computer graphics as science evidence?
How could an "unproven science rendering labeling guideline" be politely and effectively shared with CG artists, physicists, and science content producers?
Sagittarius A*: https://en.wikipedia.org/wiki/Sagittarius_A* :
> Astronomers have been unable to observe Sgr A* in the optical spectrum because of the effect of 25 magnitudes of extinction (absorption and scattering) by dust and gas between the source and Earth. [26]
> [...]
> In May 2022, astronomers released the first image of the accretion disk around the horizon of Sagittarius A*, confirming it to be a black hole, using the Event Horizon Telescope, a world-wide network of radio observatories. [13] This is the second confirmed image of a black hole, after Messier 87's supermassive black hole in 2019. [14][15] The black hole itself is not seen; as light is incapable of escaping the immense gravitational force of a black hole, only nearby objects whose behavior is influenced by the black hole can be observed. The observed radio and infrared energy emanates from gas and dust heated to millions of degrees while falling into the black hole. [16]
EHT: Event Horizon Telescope: https://en.wikipedia.org/wiki/Event_Horizon_Telescope
First EHT image of Sagittarius A*: (2017, 2022) https://commons.wikimedia.org/wiki/File:EHT_Saggitarius_A_bl...
/? How does the rotation of our solar system relate to the rotation of Sagittarius A*? https://www.google.com/search?q=How+does+the+rotation+of+our...
Milky Way > Astrography > Sun's location and neighborhood: https://en.wikipedia.org/wiki/Milky_Way#Sun's_location_and_n... :
> The apex of the Sun's way, or the solar apex, is the direction that the Sun travels through space in the Milky Way. The general direction of the Sun's Galactic motion is towards the star Vega near the constellation of Hercules,
> at an angle of roughly 60 sky degrees to the direction of the Galactic Center.
> The Sun's orbit about the Milky Way is expected to be roughly elliptical with the addition of perturbations due to the Galactic spiral arms and non-uniform mass distributions. In addition, the Sun passes through the Galactic plane approximately 2.7 times per orbit. [109] [unreliable source?] This is very similar to how a simple harmonic oscillator works with no drag force (damping) term. [...]
> It takes the Solar System about 240 million years to complete one orbit of the Milky Way (a galactic year), [112] so the Sun is thought to have completed 18–20 orbits during its lifetime and 1/1250 of a revolution since the origin of humans.
> The orbital speed of the Solar System about the center of the Milky Way is approximately 220 km/s (490,000 mph) or 0.073% of the speed of light.
> The Sun moves through the heliosphere at 84,000 km/h (52,000 mph). At this speed, it takes around 1,400 years for the Solar System to travel a distance of 1 light-year, or 8 days to travel 1 AU (astronomical unit). [113] The Solar System is headed in the direction of the zodiacal constellation Scorpius, which follows the ecliptic. [114]
From an Answer to "Does our solar system orbit Sagittarius A (location of the SMBH)?" https://www.quora.com/Does-our-solar-system-orbit-Sagittariu... :
> [...] Think about this: the mass of the entire Milky Way galaxy is about 1 trillion solar masses; however, Sgr A is only about 4 million solar masses: about 0.0001% of the galaxy’s total mass. The force that is most responsible for keeping the solar system in orbit around the galaxy is not Sgr A, but the gravity of the galaxy itself.
> Need evidence? Consider the fact that as the Sun moves around the galaxy, it also bobs “above” and “below” the galactic plane. Its orbital path resembles the edge of a warped record instead of a flat ellipse. This wouldn’t happen if the Sun were simply orbiting a point mass in the center of the galaxy, but it makes perfect sense if the Sun is being tugged “up” and then “down” by the galaxy’s collective gravity.
> So yes, the Sun is orbiting around a point corresponding to the location of Sgr A, but no, the Sun is not just orbiting Sgr A.
"[EHT Sagittarius A* (Sgr A*)] Milky Way black hole has 'strong, twisted' magnetic field in mesmerizing new [polarized (phase filtered)] image" (2024-03) https://www.npr.org/2024/03/28/1241403435/milky-way-black-ho...
"Astronomers unveil strong magnetic fields spiraling at the edge of Milky Way’s central black hole" (2024-03) https://www.eso.org/public/news/eso2406/
ScholarlyArticle: "First Sagittarius A* Event Horizon Telescope Results. VII. Polarization of the Ring" (2024) https://doi.org/10.3847/2041-8213/ad2df0 https://iopscience.iop.org/article/10.3847/2041-8213/ad2df0
ScholarlyArticle: "First Sagittarius A* Event Horizon Telescope Results. VIII.: Physical interpretation of the polarized ring" (2024) https://doi.org/10.3847/2041-8213/ad2df1 https://iopscience.iop.org/article/10.3847/2041-8213/ad2df1
"First image of our Milky Way's black hole [Sagittarius A*] may be inaccurate, scientists say" (2024-10) https://news.ycombinator.com/item?id=42087932 https://www.space.com/the-universe/black-holes/1st-image-of-... :
> "We hypothesize that the ring image resulted from errors during EHT's imaging analysis and that part of it was an artifact, rather than the actual astronomical structure."
> The ring-like structure, which really represents intense radio waves blasted from the brilliant disk of gas, known as the "accretion disk," that's swirling around the black hole, may in fact be more elongated than appears, the researchers argue
"An Independent Hybrid Imaging of Sgr A* from the Data in EHT 2017 Observations" (2024-10) https://doi.org/10.1093/mnras/stae1158 https://academic.oup.com/mnras/article/534/4/3237/7660988 :
> To mitigate the time variability in the Sgr A* structure, data weighting strategies derived from general relativistic magnetohydrodynamical (GRMHD) simulations with the source-size assumption are used. In addition, the EHTC analysis uses a unique criterion for selecting the final image. It is not based on consistency with the observed data, but rather on the highest rate of appearance of the image morphology from a wide imaging parameter space. Our concern is that these methods may interfere with the PSF deconvolution and cause the resulting image to reflect more structural features of the PSF than of the actual intrinsic source.
> On the other hand, our independent analysis used traditional very long baseline interferometry (VLBI) imaging techniques to derive our final image. We used the hybrid mapping method, which is widely accepted as the standard approach, namely iterations between imaging by the clean algorithm and calibrating the data by self-calibration, following well-established precautions. In addition, we performed a comparison with the PSF structure, noting the absence of distinct PSF features in the resulting images. Finally, we selected the image with the highest degree of consistency with the observational data."
> Our final image shows that the eastern half of this structure is brighter, which may be due to a Doppler boost from the rapid rotation of the accretion disc. We hypothesize that our image indicates that a portion of the accretion disc, located approximately 2 to several R_S (where R_S is the Schwarzschild radius) away from the black hole, is rotating at nearly 60% of the speed of light and is seen at an angle of 40°−45° (Section 4).
> general relativistic magnetohydrodynamical (GRMHD)
Magnetohydrodynamics; fluids and electricity: https://en.wikipedia.org/wiki/Magnetohydrodynamics
Magnetorotational instability: https://en.wikipedia.org/wiki/Magnetorotational_instability
SPH: Smoothed-Particle Hydrodynamics (1970s)
QHD: Quantum Hydrodynamics
SQS: Superfluid Quantum Space
SQG: Superfluid Quantum Gravity and SQR: Superfluid Quantum Relativity
Fedi's
> Schwarzschild radius
Does this image help to confirm or reject the Schwarzschild radius, or whether black holes have hair or turbulence at the boundary we've been calling the event horizon, though Hawking radiation permeates through all such boundaries in at least some directions?
> which may be due to a Doppler boost from the rapid rotation of the accretion disc.
How much does the thermal energy vary within the (already physically distributed) matter surrounding the black hole?
Isn't there reason to believe that things are in a different phase of matter at those darker temperatures? Are there superfluid states at those thermal ranges? Are our phase transition diagrams complete? Standard phase transition diagrams do not describe the Mpemba effect, for example. [TODO: cite] Does fusion plasma research have insight into the phases of matter at higher temperatures? What are the temperatures around the disturbance?
Does M87* show a similar Doppler effect to the one this analysis suggests affects the 2017 EHT Sgr A* image?
> Mpemba effect, phase transitions
And whether that matters for imaging of an accretion disc
> From "Exploring Quantum Mpemba Effects" (2024) https://physics.aps.org/articles/v17/10 :
> Similar processes, in which a system relaxes to equilibrium more quickly if it is initially further away from equilibrium, are being intensely explored in the microscopic world. Now three research teams provide distinct perspectives on quantum versions of Mpemba-like effects, emphasizing the impact of strong interparticle correlations, minuscule quantum fluctuations, and initial conditions on these relaxation processes [3–5]. The teams’ findings advance quantum thermodynamics
Is quantum thermodynamics modeled in the GRMHD magnetohydrodynamics applied here?
From https://news.ycombinator.com/item?id=42369294#42371946 https://westurner.github.io/hnlog/#comment-42371946 :
> Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
>> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
> The same technology that we use to label or watermark genai image and video content could be used to label n-body gravity and black hole images.
"IPTC publishes metadata guidance for AI-generated “synthetic media”" (2023) https://iptc.org/news/iptc-publishes-metadata-guidance-for-a...
C2PA: Coalition for Content Provenance and Authenticity (C2PA) > Specifications > Attestation: https://c2pa.org/specifications/specifications/1.4/attestati...
Research of RAM data remanence times
From https://blog.3mdeb.com/2024/2024-12-13-ram-data-decay-resear... :
> The main goal is to determine the time required after powering off the platform for all the data to be irrecoverably lost from RAM.
Cold boot attack: https://en.wikipedia.org/wiki/Cold_boot_attack
Data remanence: https://en.wikipedia.org/wiki/Data_remanence
Ironically, quantum computers have the opposite problem: qubit coherence times are still under a second, so there is not yet a way to store a qubit for even one second.
(2024: 291±6µs, 2022: 20µs)
"Cryogenically frozen RAM bypasses all disk encryption methods" (2008) https://www.zdnet.com/article/cryogenically-frozen-ram-bypas...
And that's part of why we have secure enclaves and TPMs.
From https://news.ycombinator.com/item?id=37099515 :
> If quantum information is never destroyed – and classical information is quantum information without the complex term i – perhaps our [RAM] states are already preserved in the universe; like reflections in water droplets in the quantum foam
According to the "Air-gap malware" Wikipedia article's description of GSMem, the x86 RAM bus can be used as a low-power GSM antenna. HDMI cables act as antennas, and ungrounded computers leak emanations into the ungrounded microphone port on the sound card.
Air gap malware: https://en.wikipedia.org/wiki/Air-gap_malware
Stochastic forensics: https://en.wikipedia.org/wiki/Stochastic_forensics
Back in the day, we called that https://en.wikipedia.org/wiki/Van_Eck_phreaking
Room-temperature superconductivity in Bi-based superconductors
"Room-temperature superconductivity: Researchers uncover optical secrets of Bi-based superconductors" https://phys.org/news/2024-12-room-temperature-superconducti...
"Wavelength dependence of linear birefringence and linear dichroism of Bi2−xPbxSr2CaCu2O8+δ single crystals" (2024) https://www.nature.com/articles/s41598-024-78208-6
Paper title: Wavelength dependence of linear birefringence and linear dichroism of Bi2−xPbxSr2CaCu2O8+δ single crystals
[deleted]
Noninvasive imaging method can penetrate deeper into living tissue
Mary Lou Jepsen of openwater.cc [2] has managed to image neurons and other large cell structures deep inside living bodies by phase-wave interferometry, descattering signals through human tissues, even through skull and bone [1], using CMOS imaging chips, lasers, ultrasonics, and holographics. It's a thousand times cheaper than fMRI. She aims to get to real-time, million-pixel moving-picture resolution of around a micron. Eventually she will be able to read and write neurons, or vibrate and someday maybe burn cell tissue to destruction.
There are later presentations at the website where the technique is better described and visualized, but [1] is a good quick place to start and judge if you want to study their brain scanner in more depth. There are patents and a few scientific publications [3] that I'm aware of, but mostly many up-to-date talks with demonstrations by startup founder Mary Lou [4]. And recently she is open-sourcing [5] parts of the hardware and software on GitHub [6], so we can start building our own lab setups and improve the imaging software.
[1] https://www.youtube.com/watch?v=awADEuv5vWY
[2] https://www.openwater.health/
[3] https://scholar.google.com/citations?hl=en&user=5Ni7umEAAAAJ...
[4] https://www.youtube.com/watch?v=U_cHAH4T8Co
From the OpenwaterHealth/opw_neuromod_sw README: https://github.com/OpenwaterHealth/opw_neuromod_sw :
> open-LIFU uses an array to precisely steer the ultrasound focus to the target location, while its wearable small size allows transmission through the forehead into a precise spot location in the brain even while the patient is moving.
Does it use one beam of the array to waveguide another beam? For imaging or for treatment?
From "A simple technique to overcome self-focusing, filamentation, supercontinuum generation, aberrations, depth dependence and waveguide interface roughness using fs laser processing" (2017) https://www.nature.com/articles/s41598-017-00589-8 https://scholar.google.com/scholar?start=10&hl=en&as_sdt=5,4... :
> We show that all these effects can be significantly reduced if not eliminated using two coherent, ultrafast laser-beams through a single lens - which we call the Dual-Beam technique. Simulations and experimental measurements at the focus are used to understand how the Dual-Beam technique can mitigate these problems. The high peak laser intensity is only formed at the aberration-free tightly localised focal spot, simultaneously, suppressing unwanted nonlinear side effects for any intensity or processing depth. Therefore, we believe this simple and innovative technique makes the fs laser capable of much more at even higher intensities than previously possible, allowing applications in multi-photon processing, bio-medical imaging, laser surgery of cells, tissue and in ophthalmology, along with laser writing of waveguides.
Gah – CLI to install software from GitHub Releases
Software installers should check hashes and signatures before installing anything.
Doesn't GitHub support package repositories for signed packages?
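As a minimal sketch of the missing step, an installer can refuse to proceed unless the artifact's recomputed hash matches the published digest. The artifact bytes and digest below are stand-ins for a real release and its published SHA256SUMS file:

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Recompute the artifact's SHA-256 and compare it to the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

# Stand-ins for a downloaded release artifact and its published checksum.
artifact = b"release payload"
published = hashlib.sha256(artifact).hexdigest()

print(verify_sha256(artifact, published))         # True: safe to install
print(verify_sha256(artifact + b"!", published))  # False: abort the install
```

A checksum only proves integrity; signature verification (e.g. Sigstore/cosign or Minisign) is additionally needed to prove who published the digest.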
https://slsa.dev/get-started explains about verifiable build provenance attestations:
> SLSA 2
> To achieve SLSA 2, the goals are to:
> - Run your build on a hosted platform that generates and signs provenance
> - Publish the provenance to allow downstream users to verify
> [...]
> SLSA 3
> To achieve SLSA 3, you must:
> - Run your build on a hosted platform that generates and signs provenance
> - Ensure that build runs cannot influence each other
> - Produce signed provenance that can be verified as authentic
And:
> For now, the convention is to keep the provenance attestation with your artifact. Though Sigstore is becoming more and more popular, the format of the provenance is currently tool-specific.
That's what Aqua [0] does, plus more!
From https://aquaproj.github.io/ :
> aqua installs tools securely. aqua supports Checksum Verification, Policy as Code, Cosign and SLSA Provenance, GitHub Artifact Attestations, and Minisign. Please see Security.
Show HN: Quantus – LeetCode for Financial Modeling
Hi everyone,
I wanted to share Quantus, a finance learning and practice platform I’m building out of my own frustration with traditional resources.
As a dual major in engineering and finance who started my career at a hedge fund, I found it challenging to develop hands-on financial modeling skills using existing tools. Platforms like Coursera, Udemy, Corporate Finance Institute (CFI), and Wall Street Prep (WSP) primarily rely on video-based tutorials. While informative, these formats often lack the dynamic, interactive, and repetitive practice necessary to build real expertise.
For example, the learning process often involves:
- Replaying videos multiple times to grasp key concepts.
- Constantly switching between tutorials and Excel files.
- Dealing with occasional discrepancies between tutorial numbers and the provided Excel materials.
To solve these problems, I created Quantus—an interactive platform where users can learn finance by trying out formulas or building financial models directly in an Excel-like environment. Inspired by LeetCode, the content is organized into three levels—easy, medium, and hard—making it accessible for beginners while still challenging for advanced users.
Our growing library of examples includes:
- 3-statement financial models
- Discounted Cash Flow (DCF) analysis
- Leveraged Buyouts (LBO)
- Mergers and Acquisitions (M&A)
Here’s a demo video to showcase the platform in action. https://www.youtube.com/watch?v=bDRNHgBERLQ
I’d love to hear your thoughts and feedback! Let me know what other features or examples you’d find useful.
How does your solution differ from JupyterHub + Otter-Grader or nbgrader? What about Kaggle Learn?
From taking the Udacity AI Programming in Python course, it looks like Udacity has a Jupyter Notebook-based LMS/LRS/CMS platform.
How does your solution differ from the "Quant Platform" which hosts [Certificate in] "Python for Finance": https://home.tpq.io/pqp/
Do you award (OpenBadges) educational credentials as Blockcerts (W3C Verifiable Credentials, W3C DIDs: Decentralized Identifiers) with blockchain-certificates/cert-issuer? https://github.com/blockchain-certificates/cert-issuer#how-b...
Our goal is to help users improve their finance and accounting knowledge. This includes acing job interviews and learning how to create different financial models for varied sectors. Currently, we have no plans to provide coding lessons or certificates.
Small Businesses vs. Corporations: What Tech Tools Are We Missing?
I was browsing Y Combinator's latest Request for Startups and one prompt caught my attention:
"Develop tools that help small businesses operate at the same level as large corporations"
This got me thinking: What critical operational capabilities are small businesses currently lacking?
What technological or strategic barriers prevent them from competing on equal footing with larger enterprises?
I'm curious to hear from entrepreneurs, small business owners, and tech enthusiasts:
What tools or systems do you wish existed?
What daily/weekly/monthly tasks feel unnecessarily complex or time-consuming?
What do large corporations do that seems out of reach for smaller organizations?
> What critical operational capabilities are small businesses currently lacking?
ERP / Accounting + Finance + Corporate Treasury integration (with ILP Interledger)
/? ERP accounting site:news.ycombinator.com site: github.com https://www.google.com/search?q=erp+accounting+site%3Anews.y...
/? ERP https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
/? SMB https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Security.
Having to pay for SAML and SCIM integration.
MDM and EDR.
Security baseline configuration deployments for different OS.
It’s a farce.
You mean the security in large organizations is a farce?
SAML/SCIM integrations are often buggy or don't work as advertised.
MDM is just a circus in the making, and EDR can be easily bypassed.
Pentests are barely worth more than script kiddies, even from well-known and recognized vendors.
I'm not even specialized in security, and it drives me crazy how many bypasses and workarounds exist in IT organizations while everyone pretends everything is well managed and designed.
Re: IAM cost workarounds in SMBs, SAML / Oauth2/OIDC / LDAP:
From "Show HN: Skip the SSO Tax, access your user data with OSS" https://news.ycombinator.com/item?id=35529042 :
glim: https://github.com/doncicuto/glim
"Proxy LDAP to limit scope of access #60" https://github.com/doncicuto/glim/issues/60
glauth: https://github.com/glauth/glauth
slapd-sql: https://linux.die.net/man/5/slapd-sql
gitlab-ce-ldap-sync (PHP) https://github.com/Adambean/gitlab-ce-ldap-sync
Open Source SSO for SMB
"Launch HN: SSOReady (YC W24) – Making SAML SSO painless and open source" https://news.ycombinator.com/item?id=41110850 :
ssoready: https://github.com/ssoready/ssoready
Next-generation datacenters consume zero water for cooling
So MICROS~1 has finally discovered the wonders of heat pump systems? Great, what next, photovoltaics?
Thermophotovoltaics (TPV) and Thermoelectrics.
This is also a positive development in HPC sustainability:
"New HPE fanless cooling achieves a 90% reduction in cooling power consumption" (2024) https://news.ycombinator.com/item?id=42106715
[flagged]
Also, in a linked post [0], it looks like this still uses water in a closed loop heat exchange system. It just won't evaporate for cooling.
[0] https://www.microsoft.com/en-us/microsoft-cloud/blog/2024/07...
To be clear, I still consider this laudable and hope to change our house to use a heat pump eventually, rather than a furnace plus air conditioner pair, per Technology Connections' advisement [1].
[1] https://www.youtube.com/watch?v=7J52mDjZzto 35 minutes
On YouTube, I've seen pools heated by immersion mining rigs and also pools heated by attic heat; but I've never heard of a "zero water datacenter".
Great work.
FWIU Type-III superconductors also have almost no loss.
Is there a less corrosive sustainable thermal fluid or a water additive to prevent corrosion?
Is there (solid-state) thermoelectric heat energy reclamation?
Launch HN: Double (YC W24) – Index Investing with 0% Expense Ratios
Hi HN, we’re JJ and Mark, the founders of Double (https://double.finance). Double lets you invest in 50+ broad stock market indexes with 0% expense ratios. We handle all the management, including rebalancing and tax-loss harvesting—proactively selling losing stocks to potentially save on taxes—for $1/month.
Our goal is to bring the low fee trend pioneered by Robinhood to ETFs and mutual funds. We posted a Show HN about 3 months ago (https://news.ycombinator.com/item?id=41246686) and since then have crossed $10M in AUM (Assets Under Management) [1].
Here’s a demo video: https://www.loom.com/share/10c9150ce4114f278e8c249f211e7ec8. Please note that none of this content should be treated as financial advice.
Everyone knows that fees eat into your investing returns. Financial advisors generally charge 1% of AUM per year, and ETFs have a weighted average expense ratio of about 0.17%, although some go as low as 0.03% for VOO. Over a 30-year period, on a $500k portfolio with $2k invested monthly, the money lost to those fees would be $1.30M for the financial advisor, $244k for the average ETF, and even $42,951 for the low-fee VOO.
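The fee-drag arithmetic above can be reproduced with a simple monthly compounding loop. The 7% gross annual return below is an assumption for illustration; only the fee rates and contribution schedule come from this post, and the exact dollar figures depend on the compounding convention chosen:

```python
# Monthly-compounding sketch of fee drag on a $500k portfolio with
# $2k monthly contributions over 30 years. The 7% gross return is assumed.

def final_value(initial, monthly, years, annual_return, annual_fee):
    """Grow a portfolio with monthly contributions at a net-of-fee rate."""
    balance = initial
    r = (annual_return - annual_fee) / 12  # net monthly growth rate
    for _ in range(years * 12):
        balance = balance * (1 + r) + monthly
    return balance

no_fee  = final_value(500_000, 2_000, 30, 0.07, 0.0)
advisor = final_value(500_000, 2_000, 30, 0.07, 0.01)    # 1% AUM fee
avg_etf = final_value(500_000, 2_000, 30, 0.07, 0.0017)  # 0.17% expense ratio

print(f"lost to 1% advisor fee:      ${no_fee - advisor:,.0f}")
print(f"lost to 0.17% expense ratio: ${no_fee - avg_etf:,.0f}")
```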
Double lets you index invest without paying any percentage-based fees - we charge just $1/month. It works by buying the individual stocks that make up popular indexes. By buying the individual positions, we can also customize and tax-loss harvest your account, something ETFs or Mutual Funds cannot do.
Most ETFs and mutual funds today are not that complicated: they can be expressed as a CSV file with two columns, a ticker and a share number. You can find these holdings CSV files on most ETF pages (VOO [2], QQQ [3]). Right now there are about $9.1T of assets in ETFs [4] and $20T in mutual funds [5] in the US, with estimated revenue of $100B per year. We think this market is ripe for disruption.
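A sketch of that representation: a two-column holdings file plus quotes is enough to recover an index's target weights. The tickers, share counts, and prices below are made up for illustration:

```python
# Turn a ticker/shares holdings CSV into target portfolio weights.
import csv
import io

holdings_csv = """ticker,shares
AAPL,100
MSFT,80
NVDA,50
"""

prices = {"AAPL": 230.0, "MSFT": 430.0, "NVDA": 140.0}  # hypothetical quotes

rows = csv.DictReader(io.StringIO(holdings_csv))
values = {r["ticker"]: int(r["shares"]) * prices[r["ticker"]] for r in rows}
total = sum(values.values())
weights = {ticker: value / total for ticker, value in values.items()}

for ticker, weight in weights.items():
    print(f"{ticker}: {weight:.1%}")  # market-value weight of each holding
```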
We offer 50+ strategies that track popular ETFs and are updated as stocks merge and indexes change. You can customize these by weighting sectors or stocks differently, or even build your own indexes from scratch using our stock/etf screening tools. Once you've chosen your strategy, simply set your target weights and deposit funds (we also support transferring existing stocks). Our engine then checks your portfolio daily for potential trades, optimizes for tax-loss harvesting, portfolio tracking, and redeploys any generated cash.
I (JJ) started working on this after selling my last company. After using nearly every brokerage product out there and working with a financial advisor, I noticed a huge gap between the indexing capabilities of financial advisors and what individual investors could access. We wanted to bridge that gap and provide these powerful tools to everyone in a simple, low-cost way.
There are a number of robo-advisor products out there, but none that we know of offer direct indexing without expense ratios or AUM fees. One similar product is M1 Finance, but Double is more powerful. We offer tax-loss harvesting, a wider range of indexes, and greater customization. For example, when building your own index, you can set weights down to 0.1% (compared to M1's 1%) and even weight by market cap.
We also compete with robo-advisors like Wealthfront, but offer more control over your investments. And did I mention we don't charge AUM fees? You can see our strategies and play with the research page https://double.finance/p/explore without creating an account.
Over the past year we’ve learned a lot about the guts of building portfolio software. For example, stocks don’t really have persistent identifiers that are easy to model and pass around. We trade CUSIPs with our custodian Apex*, but these change all the time for stock splits or re-orgs that you would not think would lead to a new “stock”.
We’ve also learned a lot about how tax-loss harvesting (TLH) is best implemented on large direct-index portfolios using a factor model, as opposed to the pairs-based replacements I initially thought might be the way to execute these (we do still use a pairs strategy on smaller portfolios). And we've learned that TLH and portfolio optimization generally are best expressed as a linear optimization problem with competing objectives (tracking vs. tax alpha vs. trading costs, for example).
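The competing-objectives idea can be illustrated very loosely by scalarizing the objectives into one score and picking the best candidate action. The weights and candidate trades below are made up for illustration; a production system would use a proper LP/QP solver over the whole portfolio:

```python
# Combined score: penalize added tracking error and trading cost, reward harvested loss.
candidates = [
    {"name": "sell lot A, buy factor substitute", "track": 0.02, "tax": 800.0, "cost": 5.0},
    {"name": "sell lot B, buy factor substitute", "track": 0.10, "tax": 1200.0, "cost": 5.0},
    {"name": "do nothing", "track": 0.0, "tax": 0.0, "cost": 0.0},
]
w_track, w_tax, w_cost = 5000.0, 0.3, 1.0  # hypothetical objective weights

def score(c):
    return w_track * c["track"] - w_tax * c["tax"] + w_cost * c["cost"]

best = min(candidates, key=score)
print(best["name"])  # lot A wins: decent tax loss with little tracking error
```

Lot B harvests a bigger loss but its tracking-error penalty outweighs it under these weights, which is exactly the tension the comment describes.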
If you have any thoughts on the product or our positioning as the low fee alternative I’d love to hear it. I think Robinhood has proved that you can build a strong business by getting rid of an industry wide cost (in their case commissions, in our expense ratios). We aim to do the same.
[1] https://www.axios.com/pro/fintech-deals/2024/12/10/direct-in... [2] https://investor.vanguard.com/investment-products/etfs/profi... [3] https://www.invesco.com/qqq-etf/en/about.html [4] https://fred.stlouisfed.org/series/BOGZ1LM564090005Q [5] https://fred.stlouisfed.org/series/BOGZ1LM654090000Q
* Edit to add a note on risk: If Double goes out of business, your assets are safe and held in your name at Apex Clearing. They have processes in place for these scenarios to help you access and transfer those assets. See more at https://news.ycombinator.com/item?id=42379135 below.
Ummm, have y'all thought about spread costs?
If you look at the spread of any of these ETFs mentioned (spread = ask px - bid px), you will notice that the spread is much smaller than if you were to sum up the spreads of each component stock.
That's possible because of a mature ecosystem of ETF market makers and arbitrageurs (like Jane Street).
If you buy all of the stocks individually, as it sounds like y'all's solution does, you will pay the spread cost for every. single. stock. The magnitude of these costs is not huge, but if we're comparing them against the average ETF's 17 bps/yr expense ratio, it's worth quantifying them.
I imagine eventually you can hope that market makers will be able to quote a tight spread on whatever the basket of stocks a client wants, but in the meantime, users would be bleeding money to these costs.
(Source: I work in market making and think about spreads more than I would like to admit.)
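The comparison above can be put in numbers. A one-way purchase pays roughly half the quoted spread per name, so summing weighted component half-spreads versus the ETF's own half-spread shows the gap (all quotes below are made-up illustrations, not real market data):

```python
def half_spread_bps(bid, ask):
    """One-way cost of crossing half the spread, in basis points of mid price."""
    mid = (bid + ask) / 2
    return (ask - bid) / 2 / mid * 1e4

# Hypothetical quotes: a tight ETF vs. three of its components.
etf_cost = half_spread_bps(499.98, 500.02)  # ~0.4 bp one-way
components = [(100.00, 100.04), (50.00, 50.03), (25.00, 25.02)]
weights = [0.5, 0.3, 0.2]
basket_cost = sum(w * half_spread_bps(b, a) for w, (b, a) in zip(weights, components))
print(f"ETF: {etf_cost:.2f} bp, component basket: {basket_cost:.2f} bp")
```

In this toy example the basket pays a few basis points per round of trading versus a fraction of a basis point for the ETF, which is why the per-stock spread cost is worth weighing against the expense-ratio savings.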
I just found this while researching for my other comment in this thread; re: "Fund of Funds Investment Agreements",
Would Rule 12d1-4 (2020) apply to holding funds versus holding individual stocks and/or ETFs? What about the 75-5-10 rule for mutual funds?
From https://www.klgates.com/SEC-Adopts-New-Rule-12d1-4-Overhauli... :
> Rule 12d1-4 will prohibit an acquiring fund and its “advisory group” from controlling, individually or in the aggregate, an acquired fund, except for an acquiring fund: (1) in the same fund group as the acquired fund; or (2) with a sub-adviser that also acts as adviser to the acquired fund. [4] Rule 12d1-4 requires an acquiring fund to aggregate its investment in an acquired fund with the investment of the acquiring fund’s advisory group to assess control
How does running index funds with 0% expense ratios differ from running diversified 401(k) funds?
Gusto payroll data can be synced with Guideline, which offers various 401(k) and IRA plans.
Are there All Weather or Golden Butterfly index funds?
Which well known index funds are weighted and which aren't? (This is probably not common knowledge, and might be useful for your pitch)
Given that you can't buy fractional shares, how and when are weighted indexes rebalanced to maintain the initial weight?
Like most indexes, the S&P 500 demonstrates survivorship bias: underperformers are removed from the index, so it is not a good indicator of total-market performance over time.
From https://www.investopedia.com/articles/investing/030916/buffe... :
> Buffett's ultimately successful contention was that, including fees, costs and expenses, an S&P 500 index fund would outperform a hand-picked portfolio of hedge funds over 10 years. The bet pit two basic investing philosophies against each other: passive and active investing.
It's common for (cryptoasset) backtesting to have the S&P 500 as a benchmark. Weighted by market cap, the S&P 500 may or may not have higher returns than cryptoassets (for which there were not ETFs for so long).
Do you offer index fund backtesting; or, which performance and relative cost savings metrics do you track for each index fund?
How would a hypothetical index fund have performed during stress events, corrections, drawdowns, flash crashes, stress testing scenarios, and recessions; according to backtesting?
Do you offer fundamentals data?
Do you offer [GRI] sustainability report data to support portfolio/fund design?
Do you offer funds or index fund design with an emphasis on sustainability and responsible investing?
Can I generate an index fund to focus on one or more Sustainable Development Goals?
What is the difference between creating an index fund with you as compared with holding stocks in a portfolio and periodically rebalancing and reassessing?
IIUC in terms of cryptoassets:
- A (weighted and rebalanced) index fund is a collection of tokens.
- Each constituent stock or ETF could or may already be tokenized as a cryptoasset.
- A token is a string identifier for an asset. A token is a smart contract that has the necessary methods (satisfies the smart contract functional interface) to be exchanged over a cryptoasset network with cryptographic assurances.
- A "wrapped token" is wrapped to be listed on a different network. So, for example, if someone wanted to sell NASDAQ:AAPL on a different exchange or cryptoasset network they would need to wrap it and commit to an approved, on-file ETF fund management commitment that specifies how quickly they intend to buy or sell to keep the wrapped asset price close to the original asset's before-after-hours-trading market price.
- (ETFs typically have low to no fees. When you own an ETF you do not own voting shares; with ETFs, the fund owns the voting shares and votes on behalf of the ETF holders).
There are EIP and ERC standard specifications for bundles of assets; a token composed of multiple other tokens. A wallet may contain various types of fungible and non-fungible tokens. For wallet recovery and inheritance and estate planning, there's SSS, multisig transactions, multiple signature smart contracts, and Shamir backup, and banks can now legally hold cryptoassets for clients.
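The SSS / Shamir backup idea mentioned above splits a secret into n shares such that any k of them recover it. A toy 2-of-3 sketch over a prime field (educational only, not a vetted cryptographic implementation):

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field for the polynomial arithmetic

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them recover it (Shamir's scheme)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

shares = make_shares(123456789, k=2, n=3)
print(recover(shares[:2]) == 123456789)  # any 2 of the 3 shares suffice
```

Real wallet-recovery products (e.g. Shamir backup in hardware wallets) use standardized encodings and share checksums on top of this core idea.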
Alena Tensor in Unification Applications
"Alena Tensor in unification applications" (2024) https://iopscience.iop.org/article/10.1088/1402-4896/ad98ca
ALICE finds first ever evidence of the antimatter partner of hyperhelium-4
Does this actually prove that antimatter necessarily exists?
Does this prove that antimatter is necessary for theories of gravity to concur with other observations?
Do the observed properties of antimatter particles correspond with antimatter as the or a necessary nonuniform correction factor to theories of gravity?
My guess is that you are confusing "antimatter" and "dark matter".
If you want some antimatter, you can go to your nearby physics supply store and buy some radioactive material that produces positrons. It's quite easy. (Radioactive material may be dangerous. Don't fool around with it!) If you want antiprotons or antihydrogen, you need a huge particle accelerator. They make plenty of antiprotons at CERN for collisions. They are very difficult to store, so they survive only a very short time on Earth.
Dark matter is very different. We have some experimental results that don't match the current physics theories. The current best guess is that there is some matter that we can't see for some reason. Nobody is sure what it is. Perhaps it's made of very dark, big objects, or perhaps it's made of tiny particles that don't interact with light. (I'm not sure which is the current favorite in the area.) Anyway, some people don't like "dark matter" and prefer to change the theories instead, but the proposed new theories also don't match the experimental results.
> The current best guess is that there is some matter that we can't see for some reason. Nobody is sure what it is.
My pet conjecture (it's not detailed enough to be a hypothesis) is that this is related to the baryon asymmetry problem.
The matter/antimatter asymmetry problem involves more than just baryons, despite the name: we also have more electrons than positrons, not just more protons/neutrons than anti-protons/anti-neutrons.
There are a few possibilities:
1) the initial value just wasn't zero (an idea I heard from Sabine Hossenfelder)
2) the baryon number is violated in a process that requires conservation of charge
This would suggest antiprotons or antineutrons do something which involves the positron at the same time, so perhaps the anti-neutron is weirdly stable or something — neutron decay is a weak force process, and that can slightly violate the charge conjugation parity symmetry, so this isn't a completely arbitrary conjecture.
If we've got lots of (for example) surprise-stable anti-neutrons all over the place… it's probably not a perfect solution to the missing mass, but it's the right kind of magnitude to be something interesting to look at more closely.
3) the baryon number (proton/neutron/etc.) and/or lepton number (electron/positron/muon/etc.) is violated in a process that does not require conservation of charge.
If you have some combination of processes which don't each conserve charge, you're likely to get some net charge to the universe (unless the antiproton process just happens to occur at the same rate as the positron process); in quantum mechanics I understand such a thing is genuinely meaningless, while in GR this would contribute to the stress energy tensor in a way that looks kinda like dark energy.
But like I said, conjecture. I'm not skilled enough to turn this into a measurable hypothesis.
What about virtual particles, too?
Virtual particles: https://en.wikipedia.org/wiki/Virtual_particle :
> As a consequence of quantum mechanical uncertainty, any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. For this reason, virtual particles – which exist only temporarily as they are exchanged between ordinary particles – do not typically obey the mass-shell relation; the longer a virtual particle exists, the more the energy and momentum approach the mass-shell relation.
Virtual particles are the generalisation to all quantum fields of what near-field is in radio to just photons. It's where you don't really benefit much from even calling things "particles" in the first place, because the wave function itself is a much better description.
Unfortunately the maths of QM doesn't play nice with the maths of GR, which is also why zero-point effects are either renormalised to exactly zero or otherwise predict an effect 10^122 times larger than observed.
> Does this actually prove that antimatter necessarily exists?
Antimatter definitely exists, it is detectable, and used; e.g, PET scans use positrons (anti-electrons), and there have been experiments (only in animal models last I knew) with anti-proton radiotherapy for cancers.
This is the first evidence of a particular configuration of antimatter, not the first evidence of antimatter.
To be more specific, this is the first detection of antihyperhelium-4: an antihelium nucleus in which one antinucleon is replaced by an anti-lambda hyperon (which contains an anti-strange quark, per the article).
Thank you all for corrections and clarifications.
I had confused antimatter in particle theory (where there is no gravity) with dark matter, which has no explanation in particle theory and probably shouldn't be necessary for a unified model that describes n-body gravity at astrophysical and particle scales.
> [dark matter] has no explanation in particle theory
From https://news.ycombinator.com/item?id=42369294#42371561 :
> One theory of dark matter is that it's strange quark antimatter.
You're confusing antimatter with dark matter.
With a splash of antimass in that confusion too
My mistake, my mistake.
It seems I had my Antimatter confused with mah Dark matter.
Antimatter: https://en.wikipedia.org/wiki/Antimatter
Dark matter: https://en.wikipedia.org/wiki/Dark_matter
Antimass:
I and the Internet have never heard of antimass.
Negative mass: https://en.wikipedia.org/wiki/Negative_mass
Dark energy: https://en.wikipedia.org/wiki/Dark_energy
Dark fluid: https://en.wikipedia.org/wiki/Dark_fluid :
> Dark fluid goes beyond dark matter and dark energy in that it predicts a continuous range of attractive and repulsive qualities under various matter density cases. Indeed, special cases of various other gravitational theories are reproduced by dark fluid, e.g. inflation, quintessence, k-essence, f(R), Generalized Einstein-Aether f(K), MOND, TeVeS, BSTV, etc. Dark fluid theory also suggests new models, such as a certain f(K+R) model that suggests interesting corrections to MOND that depend on redshift and density
Negative mass was what I was going for with "antimass". For some reason I thought that was a common term but I guess it is not.
Not sure why I confused the terms.
FWIU this Superfluid Quantum Gravity rejects dark matter and/or negative mass in favor of supervaucuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?
From "Show HN: Physically accurate black hole simulation using your iPhone camera" https://news.ycombinator.com/item?id=42191692 :
> Ctrl-F Fedi , Bernoulli, Gross-Pitaevskii:
>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
There's a newer paper on it.
Alternatives to general relativity > Testing of alternatives to general relativity: https://en.wikipedia.org/wiki/Alternatives_to_general_relati...
The new Sagittarius A* black hole image with phase might help with discarding models unsupported by evidence. Are those knots or braids or fields around a vortical superfluidic attractor system? There doesn't appear to be a hard Schwarzschild-radius boundary at all.
But that's about dark matter, not antimatter.
Approximating mathematical constants using Minecraft
Wonder if the authors are familiar with https://www.luanti.org/. Just as "fun" surely, and less nudging kids into the jaws of the proprietary toll-extracting behemoths of our world.
Luanti (formerly Minetest) is open source, written in C++ with a Lua modding API, has mediawiki docs, and has a vscode extension: https://github.com/minetest/minetest
Luanti wiki > Modding intro: https://dev.minetest.net/Modding_Intro
SensorCraft is open source, written in Python, and is built on the Pyglet OpenGL library (which might compile to WASM someday) https://github.com/AFRL-RY/SensorCraft
/? Minecraft clone typescript site:github.com https://www.google.com/search?q=minecraft+clone+typescript+s...
/?hnlog minecraft:
https://news.ycombinator.com/item?id=32657066 :
> [ mcpi, pytest-minecraft, sensorcraft, ]
mcpi supports "Minecraft: Pi edition" and "RaspberryJuice" (~ Minecraft API on Bukkit server)
minecraft-pi-reborn is an open source rewrite of "Minecraft: Pi edition" that's written in C++. It looks like there's a glfw.h header in libreborn. Emscripten-glfw is a port of glfw which can be compiled to WASM. Glfw wraps OpenGL. Browsers have WebGL.
MCPI-Revival/minecraft-pi-reborn: https://github.com/MCPI-Revival/minecraft-pi-reborn docs: https://github.com/MCPI-Revival/minecraft-pi-reborn/blob/mas...
If you ask a random kid if they know what luanti is, they would have no clue.
Read the docs / documentation / API docs (and contribute)
Read the source
Write tests with test assertions and run them with a test runner
Learnxinyminutes > Lua: https://learnxinyminutes.com/docs/lua/
Learnxinyminutes > Python: https://learnxinyminutes.com/docs/python/
FWIW 22/7 and 666/212 are rational approximations for pi.
You can write a script to find other integer/integer approximations for pi and other transcendental numbers like e.
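For example, a brute-force scan over denominators that prints each new best rational approximation (this recovers the classic 22/7 and 355/113):

```python
from math import pi

best_err = float("inf")
improvements = []
for q in range(1, 120):
    p = round(pi * q)          # nearest numerator for this denominator
    err = abs(p / q - pi)
    if err < best_err:         # strictly better than every smaller denominator
        best_err = err
        improvements.append((p, q))
        print(f"{p}/{q} = {p/q:.7f}")
```

The same loop works for e or any other constant by swapping out `pi`; continued fractions find the same "record-setting" fractions far more efficiently.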
Hah! There's a whole Wiki page[1] about pi approximations!
Transcendental number > Numbers proven to be transcendental: https://en.wikipedia.org/wiki/Transcendental_number#Numbers_... :
> [ π, e^a where a>0, ln(a) where a!=0 and a!=1, ]
e (Euler) https://en.wikipedia.org/wiki/E_(mathematical_constant)
Pi (Archimedes,) https://en.wikipedia.org/wiki/Pi
Avogadro constant: https://en.wikipedia.org/wiki/Avogadro_constant
Approximations of Pi: https://en.wikipedia.org/wiki/Approximations_of_%CF%80
List of representations of e: https://en.wikipedia.org/wiki/List_of_representations_of_e
Four dimensional space, 4D, Minkowski (SR), Einstein (GR) https://en.wikipedia.org/wiki/Four-dimensional_space
Four vector: https://en.wikipedia.org/wiki/Four-vector
Four velocity: https://en.wikipedia.org/wiki/Four-velocity
Four gradient: https://en.wikipedia.org/wiki/Four-gradient
List of mathematical constants: https://en.wikipedia.org/wiki/List_of_mathematical_constants
Searching for small primordial black holes in planets, asteroids and on Earth
ScholarlyArticle: "Searching for small primordial black holes in planets, asteroids and here on Earth" (2024) https://www.sciencedirect.com/science/article/abs/pii/S22126...
NewsArticle: "Ancient black holes could dig tiny tunnels in rocks and buildings" (2024) https://newatlas.com/physics/primordial-black-holes-tiny-tun...
RNA-targeting CRISPR reveals that noncoding RNAs are not 'junk'
Constraints Are Good: Python's Metadata Dilemma
Eggs are dying out, as pointed out by this 2-year-old blog post:
https://about.scarf.sh/post/python-wheels-vs-eggs
The metadata problem is related to the problem that pip had an unsound resolution algorithm based on "try to resolve something optimistically and hope it works when you get stuck and try to backtrack".
I did a lot of research along the line that led to uv 5 years ago and came to the conclusion that, installing from wheels, you can set up an SMT problem the same way Maven does and solve it right the first time. There was a PEP to publish metadata files for wheels on PyPI, but I'd built something before that which could pull the metadata out of a wheel with just 3 HTTP range requests. I believed that any given project might depend on a legacy egg; in those cases you can build that egg into a wheel via a special process and store it in a private repo (a must for the perfect Python build system).
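The range-request trick works because a wheel is just a zip archive with a `*.dist-info/METADATA` file inside: one request fetches the zip end-of-central-directory record, one the central directory, and one just the METADATA member. A local sketch of the extraction step, building a fake minimal wheel in memory rather than hitting PyPI (the package name and dependency are made up):

```python
import io
import zipfile

# Build a minimal fake "wheel" in memory (real wheels have more members).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("demo-1.0.dist-info/METADATA",
                "Metadata-Version: 2.1\nName: demo\nVersion: 1.0\n"
                "Requires-Dist: requests>=2.0\n")

# Read dependency metadata without unpacking the whole archive.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    name = next(n for n in zf.namelist() if n.endswith(".dist-info/METADATA"))
    metadata = zf.read(name).decode()

deps = [line.split(": ", 1)[1] for line in metadata.splitlines()
        if line.startswith("Requires-Dist:")]
print(deps)  # ['requests>=2.0']
```

Over HTTP, the same member lookup is driven by byte ranges against the remote file instead of an in-memory buffer.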
The metadata problem is unrelated to eggs. Eggs haven’t played much of a role in a long time but the metadata system still exists.
Range requests are used by both uv and pip if the index supports it, but they have to make educated guesses about how reliable that metadata is.
The main problems are local packages during development and source distributions.
Back in the days of eggs, you couldn't count on having the metadata until you ran setup.py, which forced pip to be unreliable because so much stuff got installed and uninstalled in the process of a build.
There is a need for a complete answer for dev and private builds, I'll grant that. Private repos like we are used to in maven would help.
Eggs did not contain setup.py files. The metadata, like for wheels, was embedded in the egg (in the EGG-INFO folder), from my recollection. Eggs were zip-importable after all.
It looks like you can embed dependency data in an egg but it is also true that (1) a lot of eggs have an internal setup.py that does things like compile C code, and (2) it's a hassle for developers on some platforms who might not have the right C installed, and (3) eggs reserve the right to decide what dependencies they include when they are installed based on the environment and such.
I decided to look into how this works these days.
These days you need a TOML parser for pyproject.toml, and before Python 3.11 (which added the read-only tomllib module) there was no TOML parser in the standard library: https://packaging.python.org/en/latest/guides/writing-pyproj...
pip's docs strongly prefer pyproject.toml: https://pip.pypa.io/en/stable/reference/build-system/pyproje...
over setup.py's setup(setup_requires=[], install_requires=[]): https://pip.pypa.io/en/stable/reference/build-system/setup-p...
Blaze and Bazel have Skylark/Starlark to support procedural build configuration with maintainable conditionals
Bazel docs > Starlark > Differences with Python: https://bazel.build/rules/language
cibuildwheel: https://github.com/pypa/cibuildwheel ;
> Builds manylinux, musllinux, macOS 10.9+ (10.13+ for Python 3.12+), and Windows wheels for CPython and PyPy;
manylinux used to specify a minimum libc version for each build tag like manylinux2 or manylinux2014; pypa/manylinux: https://github.com/pypa/manylinux#manylinux
A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600: https://github.com/mayeut/pep600_compliance#distro-compatibi...
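The PEP 600 tag scheme (and its legacy aliases) can be decoded mechanically. A small sketch mapping a platform tag to the glibc version it implies (the legacy alias versions are the ones fixed by the older manylinux PEPs):

```python
import re

# Legacy aliases defined by PEPs 513/571/599, expressed as PEP 600 glibc pairs.
LEGACY = {"manylinux1": (2, 5), "manylinux2010": (2, 12), "manylinux2014": (2, 17)}

def min_glibc(tag):
    """Minimum glibc (major, minor) implied by a manylinux platform tag."""
    m = re.match(r"manylinux_(\d+)_(\d+)_", tag)  # PEP 600 form: manylinux_x_y_arch
    if m:
        return int(m.group(1)), int(m.group(2))
    for alias, version in LEGACY.items():
        if tag.startswith(alias + "_"):
            return version
    raise ValueError(f"not a manylinux tag: {tag}")

print(min_glibc("manylinux_2_17_x86_64"))  # (2, 17)
print(min_glibc("manylinux2014_aarch64"))  # (2, 17) - same constraint, legacy spelling
print(min_glibc("manylinux1_i686"))        # (2, 5)
```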
> Works on GitHub Actions, Azure Pipelines, Travis CI, AppVeyor, CircleCI, GitLab CI, and Cirrus CI;
Further software supply chain security controls: SLSA.dev provenance, Sigstore, and the new PyPI attestations storage too
> Bundles shared library dependencies on Linux and macOS through `auditwheel` and `delocate`
delvewheel (Windows) is similar to auditwheel (Linux) and delocate (Mac) in that it copies DLL files into the wheel: https://github.com/adang1345/delvewheel
> Runs your library's tests against the wheel-installed version of your library
Conda runs tests of installed packages;
Conda docs > Defining metadata (meta.yaml) https://docs.conda.io/projects/conda-build/en/latest/resourc... :
> If this section exists or if there is a `run_test.[py,pl,sh,bat,r]` file in the recipe, the package is installed into a test environment after the build is finished and the tests are run there.
Things that support conda meta.yaml declarative package metadata: conda and anaconda, mamba and mambaforge, picomamba and emscripten-forge, pixi / uv, repo2docker REES, and probably repo2jupyterlite (because jupyterlite's jupyterlite-xeus docs mention mamba but not yet picomamba) https://jupyterlite.readthedocs.io/en/latest/howto/configure...
The `setup.py test` command has been removed: https://github.com/pypa/setuptools/issues/1684
`pip install -e .[tests]` expects extras_require['tests'] to include the same packages as the tests_require argument to setup.py: https://github.com/pypa/setuptools/issues/267
TODO: is there a new single command to run tests, like `setup.py test` was?
`make test` works with my editor. A devcontainers.json can reference a Dockerfile that runs something like this:
python -m ensurepip && python -m pip install -U pip setuptools
But then I still want to run the software's tests with one command. Are you telling me there's a way to do HTTPS range requests for the metadata in a wheel to get the package dependency version constraints and/or package hashes (but not GPG pubkey fingerprints to match an .asc manifest signature) and the build & test commands, but you still need an additional file besides pyproject.toml, such as Pipfile.lock or poetry.lock, to store the hashes for each ~bdist wheel on each platform? There's now a -c / PIP_CONSTRAINT option to specify an additional requirements.txt, but that doesn't solve for Windows- or Mac-only requirements in a declarative requirements.txt: https://pip.pypa.io/en/stable/user_guide/#constraints-files
conda supports putting `[win]` at the end of a YAML list item if it's for windows only.
Re: optimizing builds for conda-forge (and PyPI (though PyPI doesn't build packages (when there's a new PR, and then sign each build for each platform))) https://news.ycombinator.com/item?id=41306658
Debian opens a can of username worms
Perhaps it's time to agree upon how to handle Unicode in identifiers? Normalization, unprintable characters, confusable characters with the same glyphs, etc. It's obviously problematic when everyone is doing it on their own.
Unicode has provided a specification for Unicode identifiers since 2005: https://www.unicode.org/reports/tr31/
Great! Is there a library for their validation? ICU seems to have only a spoof checker for confusables.
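Partially: CPython itself implements the UAX #31 identifier rules and normalizes source-code identifiers to NFKC, so `str.isidentifier()` plus `unicodedata.normalize` covers validation and normalization (though not confusable detection). A sketch of the same treatment applied explicitly (`normalize_identifier` is a hypothetical helper name):

```python
import unicodedata

def normalize_identifier(name):
    """Validate and normalize a name the way CPython treats source identifiers:
    UAX #31 rules via str.isidentifier(), then NFKC normalization."""
    if not name.isidentifier():
        raise ValueError(f"not a valid identifier: {name!r}")
    return unicodedata.normalize("NFKC", name)

print(normalize_identifier("naïve"))  # valid; already normalized
print(normalize_identifier("ﬁle"))    # NFKC folds the fi ligature to 'file'
print("2fast".isidentifier())         # False: leading digit
```

The ligature example is exactly the homoglyph hazard: `ﬁle` and `file` are distinct strings but name the same Python variable after normalization.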
ICU: International Components for Unicode: https://en.wikipedia.org/wiki/International_Components_for_U...
unicode-org/icu: https://github.com/unicode-org/icu
Microsoft/ICU: https://github.com/microsoft/icu
IDN: Internationalized domain name: https://en.wikipedia.org/wiki/Internationalized_domain_name
Punycode: https://en.wikipedia.org/wiki/Punycode
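Python's built-in codecs demonstrate the encoding (the `idna` codec implements the older IDNA 2003 processing around punycode; the third-party `idna` package implements IDNA 2008):

```python
# IDNA / punycode round-trip with Python's built-in codecs.
ascii_form = "bücher.example".encode("idna")
print(ascii_form)                 # b'xn--bcher-kva.example'
print(ascii_form.decode("idna"))  # 'bücher.example'

# The raw punycode codec works on a single label, without the 'xn--' prefix:
print("bücher".encode("punycode"))  # b'bcher-kva'
```

The `xn--` ACE prefix plus an ASCII-lookalike label is what makes homograph attacks possible: the DNS only ever sees the punycode form.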
IDN homograph attack: https://en.wikipedia.org/wiki/IDN_homograph_attack
CWE-1007: Insufficient Visual Distinction of Homoglyphs Presented to User: https://cwe.mitre.org/data/definitions/1007.html
GNU libidn/libidn2: https://gitlab.com/libidn/libidn2
Comparison of regular expression engines > Language features > Part 2; Unicode support: https://en.wikipedia.org/wiki/Comparison_of_regular_expressi...
libu8ident
rurban/libu8ident : https://github.com/rurban/libu8ident :
> unicode security guidelines for identifiers
Terabit-scale high-fidelity diamond data storage
"Terabit-scale high-fidelity diamond data storage" (2024) https://www.nature.com/articles/s41566-024-01573-1
"Record-breaking diamond storage can save data for millions of years" (2024) https://www.newscientist.com/article/2457948-record-breaking... :
> Previous techniques have also used laser pulses to encode data into diamonds, but the higher storage density afforded by the new method means a diamond optical disc with the same volume as a standard Blu-ray could store approximately 100 terabytes of data – the equivalent of about 2000 Blu-rays – while lasting far longer than a typical Blu-ray’s lifetime of just a few decades.
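The "about 2000 Blu-rays" figure checks out against a standard dual-layer disc:

```python
capacity_gb = 100 * 1000  # 100 TB claimed for a diamond disc of Blu-ray volume
bluray_gb = 50            # dual-layer Blu-ray capacity
print(capacity_gb / bluray_gb)  # 2000.0 discs
```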
"Optical data storage breakthrough increases capacity of diamonds by circumventing the diffraction limit" (2023) https://phys.org/news/2023-12-optical-storage-breakthrough-c... :
> [...] This is possible by multiplexing the storage in the spectral domain.
> "It means that we can store many different images at the same place in the diamond by using a laser of a slightly different color to store different information into different atoms in the same microscopic spots," said Delord, a postdoctoral research associate at CCNY. "If this method can be applied to other materials or at room temperature, it could find its way to computing applications requiring high-capacity storage." [...]
> "What we did was control the electrical charge of these color centers very precisely using a narrow-band laser and cryogenic conditions," explained Delord. "This new approach allowed us to essentially write and read tiny bits of data at a much finer level than previously possible, down to a single atom."
"Reversible optical data storage below the diffraction limit" (2023) https://www.nature.com/articles/s41565-023-01542-9
What about storing quantum bits (or qutrits or qudits) in diamond though?
"Ask HN: Can qubits be written to crystals as diffraction patterns?" (2023) https://news.ycombinator.com/item?id=38501668 :
> Aren't diffraction patterns due to lattice irregularities effective wave functions?
> Presumably holography would have already solved for quantum data storage if diffraction is a sufficient analog [of] a wave function?
"All-optical complex field imaging using diffractive processors" (2024) https://news.ycombinator.com/item?id=40541785 ; How to capture phase and intensity with a camera, Huygens-Steiner
Compressing Large Language Models Using Low Rank and Low Precision Decomposition
ScholarlyArticle: "Compressing Large Language Models Using Low Rank and Low Precision Decomposition" (2024) https://arxiv.org/abs/2405.18886 :
> CALDERA
NewsArticle: "Large language models can be squeezed onto your phone — rather than needing 1000s of servers to run — after breakthrough" (2024) https://www.livescience.com/technology/artificial-intelligen... :
Non-Contact Biometric System for Early Detection of Hypertension and Diabetes
"Non-Contact Biometric System for Early Detection of Hypertension and Diabetes Using AI and RGB Imaging" (2024) https://www.ahajournals.org/doi/10.1161/circ.150.suppl_1.413...
From https://www.notebookcheck.net/University-of-Tokyo-researcher... :
> optical, non-contact method to detect [blood pressure, hypertension, blood glucose levels, and diabetes]. A multispectral [UV + IR] camera was used and pointed at trial participants' faces to detect variations in the blood flow, which was further analyzed by AI software.
From https://news.ycombinator.com/item?id=41381977 :
> "Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
>> This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The obtained results from the machine learning algorithms illustrated that XGBoost had the highest accuracy at 98.71%
AI helps researchers dig through old maps to find lost oil and gas wells
It seems to me this approach could also be used for other aspects of mining. For instance, in Australia where I live there are many old gold, opal and other mines that have long been forgotten but which remain dangerous.
Most are unlikely to emit toxic or greenhouse gasses but they're nevertheless still dangerous because they're often very deep vertical shafts that a person could stumble across and fall in. These old mines were likely closed over when they were abandoned but often their closures/seals were made of wood that has probably rotted away over the past century or so.
It stands to reason that AI would be just as effective in this situation.
Is securing old mines a good job for (remotely-operated (humanoid)) robots?
Old mines can host gravitational energy storage.
From https://news.ycombinator.com/item?id=35778721 :
> FWIU we already have enough abandoned mines in the world to do all of our energy storage needs?
"Gravity batteries: Abandoned mines could store enough energy to power ‘the entire earth’" (2023) https://www.euronews.com/green/2023/03/29/gravity-batteries-...
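An order-of-magnitude check on gravity storage in a mine shaft: the recoverable energy is just E = m·g·h times a round-trip efficiency. The mass, depth, and 80% efficiency below are assumptions for illustration:

```python
def stored_energy_kwh(mass_kg, depth_m, round_trip_efficiency=0.8):
    """Recoverable gravitational potential energy, in kWh."""
    g = 9.81  # m/s^2
    joules = mass_kg * g * depth_m * round_trip_efficiency
    return joules / 3.6e6  # 3.6 MJ per kWh

# A 1,000-tonne weight lowered 500 m down an abandoned shaft:
print(f"{stored_energy_kwh(1_000_000, 500):.0f} kWh")  # ~1090 kWh
```

That's roughly a day's electricity for a few dozen homes per weight cycle, which is why these schemes assume many weights or very large masses.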
Nearly half of teenagers globally cannot read with comprehension
Comprehension is a higher bar than "literacy". Global adult literacy rates have been drastically improving since the 1950s (https://ourworldindata.org/literacy). Today, global adult literacy rate is estimated at ~87%.
I -think- this is the appropriate interactive plot: https://ourworldindata.org/grapher/share-of-children-reachin...
The controls are annoying, but world level data only goes back to 2000:
* 2000: 38%
* 2005: 41%
* 2010: 45%
* 2015: 47%
* 2019: 48.5% (this is the closest point)
SDG reports have literacy stats,
Goal 4: "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all" https://sdgs.un.org/goals/goal4#targets_and_indicators
Target 4.6: By 2030, ensure that all youth and a substantial proportion of adults, both men and women, achieve literacy and numeracy
Indicator 4.6.1: Proportion of population in a given age group achieving at least a fixed level of proficiency in functional (a) literacy and (b) numeracy skills, by sex
SDG4: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4
SDG4 > Target 4.6: Universal literacy and numeracy: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4...
The UN Stats 2024 Statistical Annex says there's no data for 4.6.1 since 2020? https://unstats.un.org/sdgs/files/report/2024/E_2024_54_Stat...
SDG e-Handbook re: Target 4.6.1: https://unstats.un.org/wiki/plugins/servlet/mobile?contentId...
World Bank data series: SE.ADT.LITR.ZS, https://databank.worldbank.org/metadataglossary/millennium-d...
FRED Series > Literacy Rate, Adult Total for the World (SEADTLITRZSWLD): https://fred.stlouisfed.org/series/SEADTLITRZSWLD
There is no Literacy Rate data for the United States in World Bank or in FRED according to a quick search.
Literacy in the United States: https://en.wikipedia.org/wiki/Literacy_in_the_United_States
The National Literacy Institute > Literacy Statistics 2024 - 2025 : https://www.thenationalliteracyinstitute.com/post/literacy-s...
> On average, 79% of U.S. adults nationwide are literate in 2024.
> 21% of adults in the US are illiterate in 2024.
> 54% of adults have a literacy below a 6th-grade level (20% are below 5th-grade level).
> Low levels of literacy cost the US up to $2.2 trillion per year.
But what about comprehension?
Reading comprehension: https://en.wikipedia.org/wiki/Reading_comprehension
Intel announces Arc B-series "Battlemage" discrete graphics with Linux support
Why don't they just release a basic GPU with 128 GB RAM and eat Nvidia's local generative-AI lunch? The network effect of all devs porting their LLMs etc. to that card would instantly position them as a major CUDA threat. But the beancounters running the company would never get such an idea...
Just how "basic" do you think a GPU can be while having the capability to interface with that much DRAM? Getting there with GDDR6 would require a really wide memory bus even if you could get it to operate with multiple ranks. Getting to 128GB with LPDDR5x would be possible with the 256-bit bus width they used on the top parts of the last generation, but would result in having half the bandwidth of an already mediocre card. "Just add more RAM" doesn't work the way you wish it could.
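The bus-width point can be made concrete with a back-of-the-envelope peak-bandwidth calculation (the transfer rates and widths below are typical published figures used for illustration, not Intel's actual specs):

```python
def bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: float) -> float:
    """Peak memory bandwidth: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mt_s * 1e6 / 1e9

# 256-bit LPDDR5x at 8533 MT/s -- roughly the "128 GB but slow" scenario
print(bandwidth_gb_s(256, 8533))   # 273.056 GB/s
# 192-bit GDDR6 at 20000 MT/s -- a midrange discrete GPU for comparison
print(bandwidth_gb_s(192, 20000))  # 480.0 GB/s
```

Doubling capacity by moving to LPDDR roughly halves the bandwidth, which is the tradeoff the comment describes.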
> "Just add more RAM" doesn't work the way you wish it could.
Re: Tomasulo's algorithm the other day: https://news.ycombinator.com/item?id=42231284
Cerebras WSE-3 has 44 GB of on-chip SRAM per chip and it's faster than HBM. https://news.ycombinator.com/item?id=41702789#41706409
Intel has HBM2e off-chip RAM in Xeon CPU Max series and GPU Max;
What is the difference between DDR, HBM, and Cerebras' 44GB of on-chip SRAM?
How do architectural bottlenecks due to modified Von Neumann architectures' debuggable instruction pipelines limit computational performance when scaling to larger amounts of off-chip RAM?
Tomasulo's algorithm also centralizes on a common data bus (the CPU-RAM data bus) which is a bottleneck that must scale with the amount of RAM.
Can in-RAM computation solve for error correction without redundant computation and consensus algorithms?
Can on-chip SRAM be built at lower cost?
Von Neumann architecture: https://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_N... :
> The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system. [4]
> The von Neumann architecture is simpler than the Harvard architecture (which has one dedicated set of address and data buses for reading and writing to memory and another set of address and data buses to fetch instructions).
Modified Harvard architecture > Comparisons: https://en.wikipedia.org/wiki/Modified_Harvard_architecture
C-RAM: Computational RAM > DRAM-based PIM Taxonomy, See also: https://en.wikipedia.org/wiki/Computational_RAM
SRAM: Static random-access memory https://en.wikipedia.org/wiki/Static_random-access_memory :
> Typically, SRAM is used for the cache and internal registers of a CPU while DRAM is used for a computer's main memory.
For whatever reason Hynix hasn't turned their PIM into a usable product. LPDDR based PIM is insanely effective for inference. I can't stress this enough. An NPU+LPDDR6 PIM would kill GPUs for inference.
How many TOPS/W and TFLOPS/W? (tera [floating-point] operations per second, per watt (or per watt-hour?))
/? TOPS/W and FLOPS/W: https://www.google.com/search?q=TOPS%2FW+and+FLOPS%2FW :
- "Why TOPS/W is a bad unit to benchmark next-gen AI chips" (2020) https://medium.com/@aron.kirschen/why-tops-w-is-a-bad-unit-t... :
> The simplest method therefore would be to use TOPS/W for digital approaches in future, but to use TOPS-B/W for analogue in-memory computing approaches!
> TOPS-8/W
> [ IEEE should spec this benchmark metric ]
- "A guide to AI TOPS and NPU performance metrics" (2024) https://www.qualcomm.com/news/onq/2024/04/a-guide-to-ai-tops... :
> TOPS = 2 × MAC unit count × Frequency / 1 trillion
- "Looking Beyond TOPS/W: How To Really Compare NPU Performance" (2023) https://semiengineering.com/looking-beyond-tops-w-how-to-rea... :
> TOPS = MACs * Frequency * 2
> [ { Frequency, NNs employed, Precision, Sparsity and Pruning, Process node, Memory and Power Consumption, utilization} for more representative variants of TOPS/W metric ]
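Both articles give the same headline formula; as a quick sketch (the 4096-MAC, 1 GHz, 2 W example NPU is hypothetical):

```python
def tops(mac_units: int, freq_hz: float) -> float:
    """TOPS = 2 x MAC-unit count x frequency / 1e12 (each MAC = multiply + add)."""
    return 2 * mac_units * freq_hz / 1e12

def tops_per_watt(mac_units: int, freq_hz: float, power_w: float) -> float:
    """The (coarse) efficiency metric the articles caution against over-trusting."""
    return tops(mac_units, freq_hz) / power_w

print(tops(4096, 1e9))              # 8.192 TOPS
print(tops_per_watt(4096, 1e9, 2))  # 4.096 TOPS/W
```

Note this is peak arithmetic only; utilization, precision, and memory power (the factors listed above) are exactly what it leaves out.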
Is this fast enough for DDR or SRAM? "Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9" (2024) https://news.ycombinator.com/item?id=42318944
Breakthrough in avalanche-based amorphization reduces data storage energy 1e-9
ScholarlyArticle: "Electrically driven long-range solid-state amorphization in ferroic In2Se3" (2024) https://www.nature.com/articles/s41586-024-08156-8
NewsArticles:
"'Accidental discovery' creates candidate for universal memory — a weird semiconductor that consumes a billion times less power" (2024) https://www.livescience.com/technology/computing/accidental-...
"Advances in energy-efficient avalanche-based amorphization could revolutionize data storage" (2024) https://techxplore.com/news/2024-11-advances-energy-efficien...
"Self shocks turn crystal to glass at ultralow power density" (2024) https://www.eurekalert.org/news-releases/1063944
Neuroevolution of augmenting topologies (NEAT algorithm)
NEAT: https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t...
NEAT > See also links to EANT.
EANT: https://en.wikipedia.org/wiki/Evolutionary_acquisition_of_ne... :
> Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al. [1] and Stanley and Miikkulainen [2] [NEAT (2002)]. Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming (now using the most advanced form of the evolution strategies, CMA-ES, in EANT2), in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path.
(in Hilbert space)
> [EANT] introduces a genetic encoding called common genetic encoding (CGE) that handles both direct and indirect encoding of neural networks within the same theoretical framework. The encoding has important properties that makes it suitable for evolving neural networks:
> It is complete in that it is able to represent all types of valid phenotype networks.
> It is closed, i.e. every valid genotype represents a valid phenotype. (Similarly, the encoding is closed under genetic operators such as structural mutation and crossover.)
> These properties have been formally proven. [3]
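The "adaptive step sizes" that EANT2 borrows from evolution strategies can be illustrated with the simplest member of that family, a (1+1)-ES with a 1/5-success-rule step-size adaptation. This is a toy sketch of the mechanism, not CMA-ES itself, and the objective and constants are mine:

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimum at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x, sigma=1.0, iters=200, seed=0):
    """(1+1)-ES: mutate, keep the child if it is no worse, and adapt the
    mutation step size sigma based on success (approximate 1/5 rule)."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        child = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx = child, fc
            sigma *= 1.22   # success: expand the search radius
        else:
            sigma *= 0.82   # failure: contract it
    return x, fx

best, fbest = one_plus_one_es(sphere, [3.0, -2.0])
```

CMA-ES generalizes the scalar sigma to a full adaptive covariance matrix; EANT2 applies that to network weights while structural mutation grows the topology.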
Neither MOSES: Meta-Optimizing Semantic Evolutionary Search nor PLN: Probabilistic Logic Networks are formally proven FWIU.
/? Z3 ... https://news.ycombinator.com/item?id=41944043 :
> Does [formal verification [with Z3]] distinguish between xy+z and z+yx, in terms of floating point output?
> math.fma() would be a different function call with the same or similar output.
> (deal-solver is a tool for verifying formal implementations in Python with Z3.)
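The distinction matters: `x*y + z` rounds the product before the add, while a fused multiply-add rounds once at the end, so a verifier must model them differently. A stdlib-only demonstration, using exact rational arithmetic to stand in for the correctly rounded fma result (the constants are chosen to expose the double rounding):

```python
from fractions import Fraction

x = 1.0 + 2.0**-30          # exactly representable in binary64
mul_add = x * x - 1.0       # x*x rounds to 1 + 2**-29, dropping the 2**-60 term
exact = Fraction(x) * Fraction(x) - 1   # what a correctly rounded fma would see

print(mul_add == 2.0**-29)        # True: the low bit of the product was lost
print(Fraction(mul_add) == exact) # False: fma(x, x, -1) would keep it
```

So a Z3 model of IEEE-754 must encode `x*y+z` as two rounding steps and `math.fma(x, y, z)` as one.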
EANT2 looks really cool, thanks for the link.
Show HN: Shiddns a DNS learning experience in Python
Hi HN, as a learning experience, I created a small DNS resolver with blocking capabilities and resolution of local host names. There are already tons of good ad-blocking DNS implementations out there, and they do this much better than mine. Great examples are Blocky or Pi-hole.
So when I set out to "play around", I looked at these projects and tried to modify them. Fortunately for them, but unfortunately for beginners, they are already quite mature and complex projects. So impatient me set out to "quickly write something in Python". Almost two weekends later, I had put in so much time that I thought I might as well share it for others to explore (and hopefully enjoy). I tried to keep things simple and without external dependencies. In the end it was a good experience, but it became much bigger than I intended.
So I hope you enjoy: https://github.com/vvm2/shiddns
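In the same stdlib-only spirit, the heart of any resolver is packing an RFC 1035 query message; a minimal hedged sketch (this is my illustration, not code from shiddns):

```python
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query: 12-byte header, QNAME labels, QTYPE, QCLASS."""
    # Header: id, flags (RD=1 -> 0x0100), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed with its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN

pkt = build_query("example.com")  # send via socket.sendto() to UDP port 53
```

Parsing the response (and the compression pointers it uses) is where the real complexity starts.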
It looks like dnspython has DNSSEC, DoH, and DoQ support: test_dnssec.py: https://github.com/rthalley/dnspython/blob/main/tests/test_d... , dnssec.py: https://github.com/rthalley/dnspython/blob/main/dns/dnssec.p...
man delv
Thank you for linking this. I took a peek and will look at the DNSSEC pieces in more detail; this is something I did not dare to touch when I saw the RFC jungle around DNS.
Why Certificate Transparency (CT)-style logs are not possible by logging DNS record types like CERT, OPENPGPKEY, SSHFP, CAA, RRSIG, and NSEC3, or ACMEv2 proof of domain control; and why we need a different system for signing software package build artifacts built remotely (smart contracts, JWS, SLSA, TUF, W3C Verifiable Credentials, Blockcerts, and transaction fees)
Single-chip photonic deep neural network with forward-only training
"Single-chip photonic deep neural network with forward-only training" (2024) https://www.nature.com/articles/s41566-024-01567-z :
> Here we realize such a system in a scalable photonic integrated circuit that monolithically integrates multiple coherent optical processor units for matrix algebra and nonlinear activation functions into a single chip. We experimentally demonstrate this fully integrated coherent optical neural network architecture for a deep neural network with six neurons and three layers that optically computes both linear and nonlinear functions [...]
> This work lends experimental evidence to theoretical proposals for in situ training, enabling orders of magnitude improvements in the throughput of training data. Moreover, the fully integrated coherent optical neural network opens the path to inference at nanosecond latency and femtojoule per operation energy efficiency.
> femtojoule per operation energy efficiency
From "Transistor for fuzzy logic hardware: promise for better edge computing" https://news.ycombinator.com/item?id=42166303 :
> From "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
>> Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
Performance per watt: https://en.wikipedia.org/wiki/Performance_per_watt
https://news.ycombinator.com/item?id=41702789#41706409 :
> 5nm process
"Ask HN: Can qubits be written to crystals as diffraction patterns?" https://news.ycombinator.com/item?id=38501668
Notes from "Single atom defect in 2D material can hold quantum information at room temp" (2024) https://news.ycombinator.com/item?id=40478219 re Qubit data storage (quantum wave transmission through spacetime)
"Reversible optical data storage below the diffraction limit" (2024) https://news.ycombinator.com/item?id=38528877 :
> multiplexing diamond with different color beams, "Real-space nanophotonic field manipulation using non-perturbative light–matter coupling" (2023); optical tweezers below the Abbe diffraction limit, at 1/50 the photonic wavelength
"DNA-folding nanorobots can manufacture limitless copies of themselves" https://news.ycombinator.com/item?id=38569474
Phase from Intensity because Huygens' classical oscillatory model applies to photons;
From "Physicists use a 350-year-old theorem to reveal new properties of light waves" https://news.ycombinator.com/item?id=37226160 :
>> [..] Qian's team interpreted the intensity of a light as the equivalent of a physical object's mass, then mapped those measurements onto a coordinate system that could be interpreted using Huygens' mechanical theorem. "Essentially, we found a way to translate an optical system so we could visualize it as a mechanical system, then describe it using well-established physical equations," explained Qian.
>> Once the team visualized a light wave as part of a mechanical system, new connections between the wave's properties immediately became apparent—including the fact that entanglement and polarization stood in a clear relationship with
"Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
> Abstract: [...] Here we report links of the two through a systematic quantitative analysis of polarization and entanglement, two optical coherence properties under the wave description of light pioneered by Huygens and Fresnel. A generic complementary identity relation is obtained for arbitrary light fields. More surprisingly, through the barycentric coordinate system, optical polarization, entanglement, and their identity relation are shown to be quantitatively associated with the mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation. The obtained result bridges coherence wave optics and classical mechanics through the two theories of Huygens.
/?hnlog coherent, single photon
[...]
"Photonic quantum Hall effect and multiplexed light sources of large orbital angular momenta" (2021) https://www.nature.com/articles/s41567-021-01165-8 .. https://engineering.berkeley.edu/news/2021/02/light-unbound-... :
"Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 .. https://news.ycombinator.com/item?id=39365579
From https://news.ycombinator.com/item?id=39287722 :
> "Room-temperature quantum coherence of entangled multiexcitons in a metal-organic framework" (2024) https://www.science.org/doi/10.1126/sciadv.adi3147
> "A physical [photonic] qubit with built-in error correction" (2024) https://news.ycombinator.com/item?id=39243929
> What about phononic quantum computing though, are phononic wave functions stable/coherent?
> Phase from Intensity because Huygens' classical oscillatory model applies to photons;
Correction:
Phase from second-order Intensity due to "mechanical concepts of center of mass and moment of inertia via the Huygens-Steiner theorem for rigid body rotation"
Parallel axis theorem: https://en.wikipedia.org/wiki/Parallel_axis_theorem :
> The parallel axis theorem, also known as Huygens–Steiner theorem, or just as Steiner's theorem, [1] named after Christiaan Huygens and Jakob Steiner, can be used to determine the moment of inertia or the second moment of area of a rigid body about any axis, given the body's moment of inertia about a parallel axis through the object's center of gravity and the perpendicular distance between the axes.
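In symbols, with $m$ the body's mass, $I_{\mathrm{cm}}$ its moment of inertia about the axis through the center of mass, and $d$ the perpendicular distance between the two parallel axes:

```latex
I = I_{\mathrm{cm}} + m d^{2}
```

This is the "second moment" structure the paper maps optical intensity onto.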
( Huygens-Fresnel principle > Generalized Huygens' principle > and quantum field theory: https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_princi... :
> [ Kirchhoff, path integrals (Hamilton, Lorentz, Feynman), quabla operator and Minkowski space, secondary waves and superposition ]
> Homogeneity of space is fundamental to quantum field theory (QFT) where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.
Lorentz oscillator model > See also: https://en.wikipedia.org/wiki/Lorentz_oscillator_model )
> Photonic fractional quantum Hall effect
From "Electrons become fractions of themselves in graphene, study finds" (2024) re: the electronic fractional quantum Hall effect without magnetic fields https://news.mit.edu/2024/electrons-become-fractions-graphen... :
> They found that when five sheets of graphene are stacked like steps on a staircase, the resulting structure inherently provides just the right conditions for electrons to pass through as fractions of their total charge, with no need for any external magnetic field.
"Fractional quantum anomalous Hall effect in multilayer graphene" (2024) https://www.nature.com/articles/s41586-023-07010-7
"How can electrons split into fractions of themselves?" https://news.mit.edu/2024/how-can-electrons-can-split-into-f... ... Re: why pentalayer
What was the SNL skit about the number of safety razor blades?
In rhombohedral trilayer graphene, there is superconductivity in one or more phases;
"Revealing the complex phases of rhombohedral trilayer graphene" (2024) https://www.nature.com/articles/s41567-024-02561-6 .. https://news.ycombinator.com/item?id=40919269
"Revealing the superconducting limit of twisted bilayer graphene" https://news.ycombinator.com/item?id=42051367 :
"Low-Energy Optical Sum Rule in Moiré Graphene" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... https://arxiv.org/abs/2312.03819
"Large quantum anomalous Hall effect in spin-orbit proximitized rhombohedral graphene" (2024) https://www.science.org/doi/10.1126/science.adk9749
"Physicists spot quantum tornadoes twirling in a ‘supersolid’" (2024) https://news.ycombinator.com/item?id=42082690
AI poetry is indistinguishable from human poetry and is rated more favorably
Ernest Davis has an outstanding and definitive rebuttal of these claims: https://cs.nyu.edu/~davise/papers/GPT-Poetry.pdf
The basic problem is that GPT generates easy poetry, and the authors were comparing to difficult human poets like Walt Whitman and Emily Dickinson, and rating using a bunch of people who don't particularly like poetry. If you actually do like poetry, comparing PlathGPT to Sylvia Plath is so offensive as to be misanthropic: the human Sylvia Plath is objectively better by any possible honest measure. I don't think the authors have any interest in being honest: ignorance is no excuse for something like this.
Given the audience, it would be much more honest if the authors compared with popular poets, like Robert Frost, whose work is sophisticated but approachable.
The vast majority of people seem to prefer Avengers 17 over any cinematic masterpiece; the latest Drake song would be rated better than Tchaikovsky... We should let them play and worship ChatGPT if that's what they want to waste their time on.
I don't understand the logic of calling superhero movies lesser/unserious like this; it's very snobby. Movies and music are made to be entertaining, and The Avengers is more entertaining than your "arthouse cinematic masterpiece that nobody likes, but only because they aren't smart enough to understand it". It's also lazy and ignorant to ignore the sheer manpower that goes into making a movie like that.
Nobody likes art that requires them to think.
On-scalp printing of personalized electroencephalography e-tattoos
Did a quick search for "hair" since in my experience the most obnoxious part of using a traditional EEG cap was having to rinse the gel out of my hair in the sink afterwards (I was in a research study and had to do this multiple times a month, right before heading to class. Always left me looking disheveled). It looks like this approach works better for slightly hairy skin (the person in most of these figures has a buzz cut) than other e-tattoo methods but they do note:
> More work is needed to scan and print on heads with long and thick hairs. Highlighting the persistent issue of racial bias in neuroscience research is crucial, as traditional EEG electrodes are often unreliable on individuals of African descent, resulting in suboptimal patient experiences and outcomes in clinical settings.69,70 Curly hairs tend to push against the EEG cap, reducing contact between the electrodes and the scalp, leading to a poor contact impedance.71 Developing on-scalp digital printing for different hair types should prioritize bridging this important gap.
>> More work is needed to scan and print on heads
> temporary-tattoo-like sensors
> Traditional electroencephalography (EEG) systems involve time-consuming manual electrode placement, conductive liquid gels, and cumbersome cables, which are prone to signal degradation and discomfort during prolonged use. Our approach overcomes these limitations by combining material innovations with non-contact, on-body digital printing techniques to fabricate e-tattoos that are self-drying, ultrathin, and compatible with hairy scalps. These skin-conformal EEG e-tattoo sensors enable comfortable, long-term, high-quality brain activity monitoring without the discomfort associated with traditional EEG systems. Using individual 3D head scans, custom sensor layout design, and a 5-axis microjet printing robot, we have created EEG e-tattoos with precise, tailored placement over the entire scalp. The inks for electrodes and interconnects have slightly different compositions to achieve low skin contact impedance and high bulk conductivity, respectively
"On-scalp printing of personalized electroencephalography e-tattoos" (2024) https://www.cell.com/cell-biomaterials/fulltext/S3050-5623(2...
Can they be printed on a forearm like in the Disney film "Moana 2"?
The robot is completely superfluous. These are conductive ink non-tattoos painted onto the scalp. It would be straightforward to do it by hand.
Handwriting but not typewriting leads to widespread connectivity in brain
I tried for a few minutes to find the HN thread where somebody refuted the "handwriting is better than typing" for fact recall claim.
OTOH, IIRC,
- EEG measures brain activation. It takes more of the brain to hand write than to type, and so it is unsurprising that there is more activation during a writing by hand activity than during a typing activity.
- Do brain connectivity or neural activation correspond to favorable outcomes?
- Synaptic pruning eliminates connectivity in the brain; and that is presumably advantageous.
- Hyperconnectivity may be regarded as a disadvantage in discussions of neurodiversity. Witness recall from a network with intentional hyperconnectivity; too distracted by spurious activations?
- That's a test of dumb recall, which is a test of trivia retention.
- Is rote memorization the pinnacle of learning? To foster creative thinking and critical thinking, is trivia recall optimization ideal?
- (Isn't it necessary to forget so much of what we've been taught in order to succeed?)
- Is spaced repetition as or more effective than hand writing at increasing recall compared to passively listening?
- With persons who have injury and/or disability that prevents them from handwriting, do they have adaptations that enable them to succeed despite "having to" type instead?
- We don't hand write software; though some schools teach coding with pen and paper (and that somewhat reduces cheating, which confounds variance in success and retention).
- We write tests for software; we can't efficiently automatedly test software for quality and correctness unless the code is typed.
- Does anything ever grade, confirm, reject, or validate handwritten notes?
- Are there studies of this that measure recall after typing, and then after handwriting, too?
Edit: i.e. a crossover design: A, measure, B, measure; AND: B, measure, A, measure
> I tried for a few minutes to find the HN thread where somebody refuted the "handwriting is better than typing" for fact recall claim.
You would have found it, had you written it down.
Kidding aside, is this the one? https://news.ycombinator.com/item?id=42288545
Physics-Informed Machine Learning: A Survey
"Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications" (2024) https://arxiv.org/abs/2211.08064 :
> We systematically review the recent development of physics-informed machine learning from three perspectives of machine learning tasks, representation of physical prior, and methods for incorporating physical prior. We also propose several important open research problems based on the current trends in the field. We argue that encoding different forms of physical prior into model architectures, optimizers, inference algorithms, and significant domain-specific applications like inverse engineering design and robotic control is far from being fully explored in the field of physics-informed machine learning. We believe that the interdisciplinary research of physics-informed machine learning will significantly propel research progress, foster the creation of more effective machine learning models, and also offer invaluable assistance in addressing long-standing problems in related disciplines.
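The survey's central move, "encoding different forms of physical prior" into training, is most simply realized as an extra residual term in the loss. A stdlib-only toy sketch for the ODE du/dt = -k·u with an exponential ansatz (the function name, ansatz, and constants are illustrative, not from the paper):

```python
import math

def physics_informed_loss(params, t, u_obs, k=1.0, lam=0.5):
    """Data misfit + residual of du/dt = -k*u for the ansatz u(t) = a*exp(b*t)."""
    a, b = params
    u = [a * math.exp(b * ti) for ti in t]
    data = sum((ui - oi) ** 2 for ui, oi in zip(u, u_obs)) / len(t)
    # Physics residual: du/dt + k*u = a*(b + k)*exp(b*t), zero iff b == -k
    phys = sum((a * (b + k) * math.exp(b * ti)) ** 2 for ti in t) / len(t)
    return data + lam * phys
```

A model consistent with both data and physics (a = 1, b = -k) drives the loss to zero, while a model that fits the observations alone is still penalized for violating the ODE; that penalty is the "physical prior".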
Re: LLM ML benchmarks; FrontierMath (only ~2% of tasks solved in 2024), TheoremQA: https://news.ycombinator.com/item?id=42097683
PRoot: User-space implementation of chroot, mount –bind, and binfmt_misc
Here's an example of how we've used this.
RStudio Server[0] 1.3 and older hard-coded a number of paths, such as the path for storing temporary files: instead of honoring the TMPDIR environment variable (as specified by POSIX[1]), RStudio Server would always use /tmp. That is extremely annoying, because we set TMPDIR to a path on fast local storage (SATA or NVMe SSDs) that the job scheduler cleans up at the end of the compute job.
We do have a last-resort mechanism using pam_namespace[2], such that a user going to `/tmp` actually takes them to `/namespace/tmp/${username}`, but that is per-user, not per-job. If a user has two R Studio jobs, and those two jobs landed on the same host, there would be trouble.
So we used PRoot to wrap RStudio, with /tmp bind-mounted to a directory under TMPDIR.
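The wrapper amounts to one binding in PRoot's `-b host_path:guest_path` syntax; a sketch of the shape of the command (the directory name and server arguments are illustrative, not our production script):

```shell
# Give RStudio a per-job /tmp without privileges: bind a job-scoped
# directory on fast local storage over /tmp inside the PRoot jail.
mkdir -p "$TMPDIR/rstudio-tmp"
proot -b "$TMPDIR/rstudio-tmp:/tmp" rserver --server-daemonize=0
```

Two jobs on the same host then each see their own /tmp, which pam_namespace alone could not give us.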
[0]: https://www.rstudio.com/products/rstudio/download-server/
[1]: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1...
For those that don't know, you shouldn't blindly let programs have access to /tmp: they can get access to sockets and other shared state there. If you're running with systemd, there's a PrivateTmp= option for this reason.
It's always best to sandbox programs when you can. Linux has been making this much easier, but it's still non-trivial.
From the systemd.exec man page: https://www.freedesktop.org/software/systemd/man/systemd.exe... :
PrivateTmp=true
#JoinsNamespaceOf=
`unshare -m` and then bind-mounting a private /tmp at /tmp/systemd-private-/ does the same thing; `systemd-tmpfiles --help`: https://serverfault.com/questions/1010339/how-exactly-to-use...
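A minimal sketch of that trick outside systemd (requires root or an unprivileged user namespace; the directory name is illustrative):

```shell
# New mount namespace; bind a private directory over /tmp so that only
# this shell and its children see it -- other processes keep the real /tmp.
unshare --mount sh -c '
  mkdir -p /run/private-tmp
  mount --bind /run/private-tmp /tmp
  touch /tmp/only-visible-in-this-namespace
'
```

systemd's PrivateTmp= does essentially this, plus cleanup of the backing directory when the unit stops.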
As an intermediate level user and sysadmin, this kind of thing underscores the good work systemd is doing making it easy to get sane and safe defaults for things otherwise fiddly enough that many normal people wouldn't bother.
WasmCloud makes strides with WASM component model
"WASI 0.2 Launched" (2024-01) https://news.ycombinator.com/item?id=39289533 :
> At the same time, work towards WASI 0.3, also known as WASI Preview 3, will be getting underway. The major banner of WASI 0.3 is async, and adding the future and stream types to Wit. And here again, the theme is composability. It’s one thing to do async, it’s another to do composable async, where two components that are async can be composed together without either event loop having to be nested inside the other. Designing an async system flexible enough for Rust, C#, JavaScript, Go, and many others, which all have their own perspectives on async, will take some time, so while the design work is starting now, WASI 0.3 is expected to be at least a year away.
WebAssembly/WASI > WASI Preview 2: https://github.com/WebAssembly/WASI/tree/main/wasip2
Secure your URLSession network requests using Certificate Pinning
In a world of growing cyber threats, securing your app’s communication is critical. Learn how to implement Certificate Pinning in Swift with URLSession in order to protect your iOS or macOS apps from man-in-the-middle attacks and keep user data safe.
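The URLSession mechanics are Swift-specific, but the pin check itself is small. A hedged stdlib-Python analogue that pins the leaf certificate's SHA-256 fingerprint (the function names are mine, not from the article; real deployments usually pin the SPKI hash rather than the whole certificate so that renewals don't break the pin):

```python
import hashlib
import socket
import ssl

def fingerprint_matches(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Pin check: SHA-256 of the DER-encoded certificate vs. the stored pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex.lower()

def pinned_connect_ok(host: str, pinned_sha256_hex: str, port: int = 443) -> bool:
    """TLS-connect, fetch the peer's leaf certificate in DER form, and
    verify it against the pin in addition to normal chain validation."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint_matches(der, pinned_sha256_hex)
```

On a mismatch the caller should abort the session; a man-in-the-middle with a CA-signed but different certificate fails the pin even though chain validation passes.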
Do other softwares support specifying a CA bundle per domain or cert pinning?
Should FIPS specify a more limited CA cert bundle and/or cert pinning that users manage?
When the user approves a self-signed cert in the browser, isn't that cert pinning but without PKI risks and assurances?
What about CRL and OCSP: over what channel do they retrieve the certificate revocation list; and could Certificate Transparency (CT) on a blockchain, delivered to the browser, do better at revoking [pinned] certs?
Also probably relevant to these objectives:
"Chrome switching to NIST-approved ML-KEM quantum encryption" (2024) https://www.bleepingcomputer.com/news/security/chrome-switch...
"A new path for Kyber on the web" (2024) https://security.googleblog.com/2024/09/a-new-path-for-kyber...
Does Swift yet support ML-KEM too?
Neuronal sequences in population bursts encode information in human cortex
"Neuronal sequences in population bursts encode information in human cortex" (2024) https://www.nature.com/articles/s41586-024-08075-8
"Order matters: neurons in the human brain fire in sequences that encode information" (2024) https://www.nature.com/articles/d41586-024-03835-y
"Two-dimensional neural geometry underpins hierarchical organization of sequence in human working memory" (2024) https://www.nature.com/articles/s41562-024-02047-8 .. https://news.ycombinator.com/item?id=42084279
Layer Codes
"Layer codes" (2024) https://www.nature.com/articles/s41467-024-53881-3 :
> Abstract: Quantum computers require memories that are capable of storing quantum information reliably for long periods of time. The surface code is a two-dimensional quantum memory with code parameters that scale optimally with the number of physical qubits, under the constraint of two-dimensional locality. In three spatial dimensions an analogous simple yet optimal code was not previously known. Here we present a family of three dimensional topological codes with optimal scaling code parameters and a polynomial energy barrier. Our codes are based on a construction that takes in a stabilizer code and outputs a three-dimensional topological code with related code parameters. The output codes are topological defect networks formed by layers of surface code joined along one-dimensional junctions, with a maximum stabilizer check weight of six. When the input is a family of good quantum low-density parity-check codes the output codes have optimal scaling. Our results uncover strongly-correlated states of quantum matter that are capable of storing quantum information with the strongest possible protection from errors that is achievable in three dimensions.
"Hynix launches 321-layer NAND" (2024) https://news.ycombinator.com/item?id=42229825
See: "Rowhammer for Qubits" re: whether electron tunneling in current and future generation RAM is sufficient to simulate quantum operators; whether RAM [error correction] is a quantum computer even without the EM off the RAM bus.
Breakthrough Material Perfectly Absorbs 99% of Electromagnetic Waves
"Absorption-Dominant Electromagnetic Interference (EMI) Shielding across Multiple mmWave Bands Using Conductive Patterned Magnetic Composite and Double-Walled Carbon Nanotube Film" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202406197 :
> Abstract: The revolution of millimeter-wave (mmWave) technologies is prompting a need for absorption-dominant EMI shielding materials. While conventional shielding materials struggle in the mmWave spectrum due to their reflective nature, this study introduces a novel EMI shielding film with ultralow reflection (<0.05 dB or 1.5%), ultrahigh absorption (>70 dB or 98.5%), and superior shielding (>70 dB or 99.99999%) across triple mmWave frequency bands with a thickness of 400 µm. By integrating a magnetic composite layer (MCL), a conductive patterned grid (CPG), and a double-walled carbon nanotube film (DWCNTF), specific resonant frequencies of electromagnetic waves are transmitted into the film with minimized reflection, and trapped and dissipated between the CPG and the DWCNTF. The design factors for resonant frequencies, such as the CPG geometry and the MCL refractive index, are systematically investigated based on electromagnetic wave propagation theories. This innovative approach presents a promising solution for effective mmWave EMI shielding materials, with implications for mobile communication, radar systems, and wireless gigabit communication.
Is this material useful for space travel and also quantum computers?
It says just mmWave, though.
How is it absorbing instead of reflecting energy?
Do things store energy when they absorb [mmWave] EM? As phonons?
Twisted Single Walled Carbon Nanotubes store energy;
From "LG Chem develops material capable of suppressing thermal runaway in batteries" (2024) https://news.ycombinator.com/item?id=41745154 :
> "Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
>> 583 Wh/kg
From "Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41775612 :
> "Sm2O3 micron plates/B4C/HDPE composites containing high specific surface area fillers for neutron and gamma-ray complex radiation shielding" (2024) https://www.sciencedirect.com/science/article/abs/pii/S02663...
Show HN: TeaTime – distributed book library powered by SQLite, IPFS and GitHub
Recently there seems to be a surge in SQLite-related projects. TeaTime is riding that wave...
A couple of years ago I was intrigued by phiresky's post[0] about querying SQLite over HTTP. It made me think that if anyone can publish a database using GitHub Pages, I could probably build a frontend in which users can decide which database to query. TeaTime is like that - when you first visit it, you'll need to choose your database. Everyone can create additional databases[1]. TeaTime then queries it, and fetches files using an IPFS gateway (I'm looking into using Helia so that users are also contributing nodes in the network). Files are then rendered in the website itself. Everything is done in the browser - no users, no cookies, no tracking. LocalStorage and IndexedDB are used for saving your last readings, and your position in each file.
Since TeaTime is a static site, it's super easy (and free) to deploy. GitHub repo tags are used for maintaining a list of public instances[2].
Note that a GitHub repository isn't mandatory for storing the SQLite files or the front end - it's only for the configuration file (config.json) of each database, and for listing instances. Both the instances themselves and the database files can be hosted on Netlify, Cloudflare Pages, your Raspberry Pi, or any other server that can host static files.
I'm curious to see what other kinds of databases people can create, and what other types of files TeaTime could be used for.
[0] https://news.ycombinator.com/item?id=27016630
[1] https://github.com/bjesus/teatime-json-database/
[2] https://github.com/bjesus/teatime/wiki/Creating-a-TeaTime-in...
Do static sites built with sphinx-build or jupyter-book or hugo or other jamstack static site generators work with TeaTime?
sphinx-build: https://www.sphinx-doc.org/en/master/man/sphinx-build.html
There may need to be a different Builder or an extension of sphinxcontrib.serializinghtml.JSONHTMLBuilder which serializes a doctree (basically a DOM document object model) to the output representation: https://www.sphinx-doc.org/en/master/usage/builders/#sphinxc...
datasette and datasette-lite can load CSV, JSON, Parquet, and SQLite databases; support Full-Text Search; and support search Faceting. datasette-lite is a WASM build of datasette with the pyodide python distribution.
datasette-lite > Loading SQLite databases: https://github.com/simonw/datasette-lite#loading-sqlite-data...
jupyter-lite is a WASM build of jupyter which also supports sqlite in notebooks in the browser, via `import sqlite3` in the python kernel or via a dedicated sqlite kernel: https://jupyter.org/try-jupyter/lab/
jupyterlite/xeus-sqlite-kernel: https://github.com/jupyterlite/xeus-sqlite-kernel
(edit)
xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i...
%FETCH <url> <filename> https://github.com/jupyter-xeus/xeus-sqlite/blob/ce5a598bdab...
xlite.cpp > void fetch(const std::string url, const std::string filename) https://github.com/jupyter-xeus/xeus-sqlite/blob/main/src/xl...
> Do static sites built with sphinx-build or jupyter-book or hugo or other jamstack static site generators work with TeaTime?
I guess it depends on what you mean by "work with TeaTime". TeaTime itself is a static site, generated using Nuxt. Nothing that it does cannot be achieved with another stack - at the end it's just HTML, CSS and JS. I haven't tried sphinx-build or jupyter-book, but there isn't a technical reason why Hugo wouldn't be able to build a TeaTime like website, using the same databases.
> datasette-lite > Loading SQLite databases: https://github.com/simonw/datasette-lite#loading-sqlite-data...
I haven't seen datasette before. What are the biggest benefits you think it has over sql.js-httpvfs (which I'm using now)? Is it about the ability to also use other formats, in addition to SQLite? I got the impression that sql.js-httpvfs was a bit more of a POC, and later some possibly better solutions came out, but I haven't really gone down that rabbit hole to figure out which one would be best.
Edit: looking a little more into datasette-lite, it seems like one of the nice benefits of sql.js-httpvfs is that it doesn't download the whole SQLite database in order to query it. This makes it possible to have a 2GB database but still read it in chunks, skipping around efficiently until you find your data.
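The chunked reads work via HTTP Range requests. A minimal stdlib sketch of the kind of request sql.js-httpvfs issues per page (the URL is illustrative, not a real database):

```python
import urllib.request

def range_request(url, start, length):
    """Build an HTTP Range request for `length` bytes at offset `start` --
    the mechanism sql.js-httpvfs uses to read individual SQLite pages
    on demand instead of downloading the whole file."""
    end = start + length - 1
    return urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})

# The first 100 bytes of a SQLite file are its header, so a client can read
# e.g. the page size (bytes 16-17) without downloading the rest of a 2GB file.
req = range_request("https://example.com/library.db", 0, 100)
```

A server that honors the range replies with 206 Partial Content and only the requested bytes.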
So the IPFS part is instead of a CDN? Does it replace asset URLs in templates with e.g. cf IPFS gateway URLs with SRI hashes?
datasette-lite > "Could this use the SQLite range header trick?" https://github.com/simonw/datasette-lite/issues/28
From xeus-sqlite-kernel > "Loading SQLite databases from a remote URL" https://github.com/jupyterlite/xeus-sqlite-kernel/issues/6#i... re: "Loading partial SQLite databases over HTTP":
> sql.js-httpvfs: https://github.com/phiresky/sql.js-httpvfs
> sqlite-wasm-http: https://github.com/mmomtchev/sqlite-wasm-http
>> This project is inspired from @phiresky/sql.js-httpvfs but uses the new official SQLite WASM distribution.
Datasette creates a JSON API from a SQLite database, has an optional SQL query editor with canned queries, multi DB query support, docs; https://docs.datasette.io/en/stable/sql_queries.html#cross-d... :
> SQLite has the ability to run queries that join across multiple databases. Up to ten databases can be attached to a single SQLite connection and queried together.
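A minimal sqlite3 sketch of the cross-database join described in the quote (the schema is illustrative):

```python
import sqlite3

# Two separate databases attached to one connection, queried together.
con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS other")
con.execute("CREATE TABLE main.books (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("CREATE TABLE other.authors (book_id INTEGER, name TEXT)")
con.execute("INSERT INTO main.books VALUES (1, 'Dune')")
con.execute("INSERT INTO other.authors VALUES (1, 'Frank Herbert')")

# One query spanning both databases, qualified by schema name.
row = con.execute(
    "SELECT b.title, a.name FROM main.books b "
    "JOIN other.authors a ON a.book_id = b.id"
).fetchone()
# row == ('Dune', 'Frank Herbert')
```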
The IPFS part is where the actual files are downloaded from. I wouldn't necessarily say that it's instead of a CDN - it just seems to be a reliable enough way of getting the files (without the pain of setting up a server with all these files myself). Indeed, it fetches them through IPFS Gateways (the Cloudflare IPFS Gateway is unfortunately no longer a thing) using their CID hash.
Teaching quantum physics in schools: Experts encourage focus on 2 state systems
ScholarlyArticle: "Design and evaluation of a questionnaire to assess learners’ understanding of quantum measurement in different two-state contexts: The context matters" (2024) https://journals.aps.org/prper/abstract/10.1103/PhysRevPhysE...
"Making quantum physics easier to digest in schools: Experts encourage focus on two-state systems" https://phys.org/news/2024-11-quantum-physics-easier-digest-...
Tunable ultrasound propagation in microscale metamaterials
These kinds of metamaterials will really open the door to many new applications deemed impossible before. I am curious how (or if) manufacturing these materials at scale will be possible, though.
The tool they used is basically a nanoscale version of a 3D printer that uses UV epoxy: essentially a laser focused through a high-numerical-aperture microscope to concentrate the light and cure the epoxy (there's some nonlinear action here, but it isn't that important).
The key point here being that it is a serial process where a single spot is rastered over the sample, unlike conventional optical lithography which is an area illuminating parallel process.
Top Open-Source End-to-End Testing Frameworks of 2024
Recording tests:
Playwright codegen vscode extension: https://playwright.dev/docs/codegen
Selenium IDE: https://www.selenium.dev/selenium-ide/docs/en/introduction/g...
Cypress Recorder exports to Cypress JS code and Puppeteer: https://github.com/cypress-io/cypress-recorder-extension
NightwatchJS Chrome Recorder generates tests from Chrome DevTools Recorder: https://github.com/nightwatchjs/nightwatch-chrome-recorder
Chrome DevTools Recorder exports to Puppeteer JS code: https://developer.chrome.com/docs/devtools/recorder/
Fwupd 2.0.2 Allows Updating Firmware on Many More Devices
> - Raspberry Pi Pico
https://github.com/fwupd/fwupd :
fwupdmgr get-devices
fwupdmgr refresh && fwupdmgr get-updates
fwupdmgr update
Robot Jailbreak: Researchers Trick Bots into Dangerous Tasks
Given that anyone who’s interacted with the LLM field for fifteen minutes should know that “jailbreaks” or “prompt injections” or just “random results” are unavoidable, whichever reckless person decided to hook up LLMs to e.g. flamethrowers or cars should be held accountable for any injuries or damage, just as they would for hooking them up to an RNG. Riding the hype wave of LLMs doesn’t excuse being an idiot when deciding how to control heavy machinery.
Anything that's fast, heavy, sharp, abrasive, hot, high voltage, controls a battery charger, keeps people alive, auto-fires, flies above our heads and around eyes, breakable plastic, spinning fast, interacts with children, persons with disabilities, the elderly and/or the infirm.
If kids can't push it over or otherwise disable it, and there is a risk of exploitation of a new or known vulnerability [of LLMs in general], what are the risks and what should the liability structure be? Do victims have to pay an attorney to sue, or does the state request restitution for the victim in conjunction with criminal prosecution? How do persons prove that chucky bot was compromised at the time of the offense?
SQLite: Outlandish Recursive Query Examples
Firefox bookmarks have nested folders in an arbitrary depth tree, so a recursive CTE might be faster; https://www.google.com/search?q=Firefox+bookmarks+%22CTE%22
(Edit) "Bug 1452376 - Replace GetDescendantFolders with a recursive subquery" https://hg.mozilla.org/integration/autoland/rev/827cc04dacce
"Recursive Queries Using Common Table Expressions" https://gist.github.com/jbrown123/b65004fd4e8327748b650c7738...
storing the tree as an MPTT/NestedSet would massively simplify this, without any subquery shenanigans.
https://en.m.wikipedia.org/wiki/Nested_set_model
https://imrannazar.com/articles/modified-preorder-tree-trave...
There are at least six ways to store a tree in PostgreSQL, for example: adjacency list, nested sets (MPTT: Modified Preorder Tree Traversal), nested intervals, materialized paths, ltree, JSON
E.g. django-mptt : https://github.com/django-mptt/django-mptt/blob/main/mptt/mo... :

    indexed_attrs = (mptt_meta.tree_id_attr,)
    field_names = (
        mptt_meta.left_attr,
        mptt_meta.right_attr,
        mptt_meta.tree_id_attr,
        mptt_meta.level_attr,
    )
Nested set model: https://en.wikipedia.org/wiki/Nested_set_model :

> The nested interval model stores the position of the nodes as rational numbers expressed as quotients (n/d).
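A minimal sqlite3 sketch of the nested-set read path: descendants are selected with a single range predicate on (lft, rght), so no recursion or subqueries are needed. The lft/rght values here are illustrative:

```python
import sqlite3

# Nested set: each node stores a (lft, rght) interval; a node's subtree is
# every row whose interval nests strictly inside its own.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tree (title TEXT, lft INTEGER, rght INTEGER)")
con.executemany("INSERT INTO tree VALUES (?, ?, ?)", [
    ("root", 1, 8), ("tech", 2, 5), ("python", 3, 4), ("news", 6, 7),
])

# All descendants of root (lft=1, rght=8) in one non-recursive query.
rows = con.execute(
    "SELECT title FROM tree WHERE lft > 1 AND rght < 8 ORDER BY lft"
).fetchall()
# -> tech, python, news
```

The trade-off is on the write path: inserting a node shifts the lft/rght values of every node to its right, which is why the adjacency list plus recursive CTE remains attractive for write-heavy trees like bookmarks.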
Implementing topologically ordered time crystals on quantum processors
ScholarlyArticle: "Long-lived topological time-crystalline order on a quantum processor" (2024) https://www.nature.com/articles/s41467-024-53077-9
"Robust continuous time crystal in an electron–nuclear spin system" (2024) https://www.nature.com/articles/s41567-023-02351-6 .. https://news.ycombinator.com/item?id=39291044
1990s United States Boom
Notice that it's the "1990s boom", not the "Clinton Boom".
[deleted]
Scientists discover laser light can cast a shadow
Exploits nonlinear optical interactions in a medium (ruby) to cause one beam to cast a shadow from another beam.
Right. Materials with nonlinear optical properties make optical logic possible. Here's an overview of that.[1]
Cross-gain modulation (XGM)
XGM has been investigated extensively to design optical logic gates with SOA. It is assumed that there are two input light beams for each SOA. One is the probe light and the other is the much stronger pump light. The probe light cannot pass through the SOA when the pump light saturates it. The probe light can go through the SOA when the pump light is absent.
[1] https://www.oejournal.org/article/doi/10.29026/oes.2022.2200...
Additional evidence of photons interacting despite common belief, though:
"Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315
Photons affect mass by causing phonons, which affects scattering and re-emission; so thereby photons interact with photons through light-matter photon-phonon interactions.
From the article:
> Absorption of the green laser heats the cube, which changes the phonon population and lattice spacing and, in turn, the opacity.
/? are photons really massless: https://www.google.com/search?q=are+photons+really+massless
Y: because otherwise E=mc^2 is wrong
N: because it's never been proven that photons are massless
N: because photons are affected by black holes' massful gravitational attraction, and/or magnetic fields
N: because "solar wind" and "radiation pressure" cause displacement
If photons are slightly massful, then coherent light photons would draw together over long distances; thus they interact.
"Light and gravitational waves don't arrive simultaneously" https://news.ycombinator.com/item?id=38061551
"Physicists discover that gravity can create light" https://phys.org/news/2023-04-physicists-gravity.html https://news.ycombinator.com/item?id=35633291
Async Django in Production
When will it be common knowledge that introducing async function coloring (on the callee side, instead of just 'go' on the call side) was the biggest mistake of the decade?
gevent was created over a decade ago to solve this.
python users only have themselves to blame for ignoring it.
From "Asyncio, twisted, tornado, gevent walk into a bar" (2023) https://news.ycombinator.com/item?id=37227567 :
> IIRC the history of the async things in TLA in order: Twisted (callbacks), Eventlet (for Second Life by Linden Labs), tornado, gevent; gunicorn, Python 3.5+ asyncio, [uvicorn,]
Async/await > History: https://en.wikipedia.org/wiki/Async/await
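The "function coloring" complaint in one sketch: an `async def` function can't be called from plain synchronous code without crossing an explicit boundary, whereas gevent keeps every function a plain `def` and does the yielding behind the scenes.

```python
import asyncio

async def fetch():
    # "Red" function: only callable with `await` or via an event loop.
    return 42

def caller():
    # "Blue" function: must start an event loop to cross the color boundary.
    # gevent's design avoids this split entirely -- blocking calls are
    # monkey-patched to yield to a coroutine scheduler under ordinary defs.
    return asyncio.run(fetch())
```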
Ubitium is developing 'universal' processor combining CPU, GPU, DSP, and FPGA
that's a really long winded way of saying "SoC with FPGA".
From "Universal AI RISC-V processor does it all — CPU, GPU, DSP, FPGA" (2024) https://www.eenewseurope.com/en/universal-ai-risc-v-processo... :
> For over half a century, general-purpose processors have been built on the Tomasulo algorithm, developed by IBM engineer Robert Tomasulo in 1967. It’s a $500B industry built on specialised CPU, GPU and other chips for different computing tasks. Hardware startup Ubitium has shattered this paradigm with a breakthrough universal RISC-V processor that handles all computing workloads on a single, efficient chip — unlocking simpler, smarter, and more cost-effective devices across industries — while revolutionizing a 57-year-old industry standard.
Tomasulo's algorithm: https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm
Intel, AMD, ARM, X-Silicon’s C-GPU RISC architecture, and Cerebras' on-chip SRAM architecture are all Tomasulo-algorithm OOO (Out-of-Order) execution processor architectures, FWIU
The case for open source solutions in HIPAA
From the article: https://allthingsopen.org/articles/the-case-for-open-source-... :
> currently keeping client journal entries with Apple Pages
Those are "unstructured notes" in medical informatics or health informatics.
Health informatics: https://en.wikipedia.org/wiki/Health_informatics
For the medical dictation to transcription to unstructured notes workflow,
OpenAI Whisper is free but not open source, and it hallucinates; https://www.wired.com/story/hospitals-ai-transcription-tools...
Perhaps dictation to unstructured text to chart form fields is the clinical workflow that needs optimization.
Which chart form fields work with other physicians' tools too?
FHIR is an open spec: https://en.wikipedia.org/wiki/Fast_Healthcare_Interoperabili... :
> FHIR builds on previous data format standards from HL7, like HL7 version 2.x and HL7 version 3.x. But it is easier to implement because it uses a modern web-based suite of API technology, including a HTTP-based RESTful protocol, and a choice of JSON [JSON-LD], XML or RDF for data representation.[1] One of its goals is to facilitate interoperability between legacy health care systems, to make it easy to provide health care information to health care providers and individuals on a wide variety of devices from computers to tablets to cell phones, and to allow third-party application developers to provide medical applications which can be easily integrated into existing systems. [2]
> FHIR provides an alternative to document-centric approaches by directly exposing discrete data elements as services. For example, basic elements of healthcare like patients, admissions, diagnostic reports and medications can each be retrieved and manipulated via their own resource URLs.
FHIR spec: https://build.fhir.org/
Which EHRs / EMRs support FHIR?
FHIR / Open Source Implementations: https://confluence.hl7.org/display/FHIR/Open+Source+Implemen...
FHIR / Public Test Servers: https://confluence.hl7.org/display/FHIR/Public+Test+Servers
/? awesome clinical open source: https://www.google.com/search?q=awesome+clinical+open+source
> Perhaps dictation to unstructured text to chart form fields is the clinical workflow that needs optimization.
Medical annotation, Terminology Server, NER: Named Entity Recognition, WSD: Word-sense Disambiguation, AI Summarization, look up and pre-fill the correct forms and form fields in the chart (according to the unstructured notes), Medical Coding
Clinical coding or Medical coding: https://en.wikipedia.org/wiki/Clinical_coder
From https://news.ycombinator.com/item?id=38739890#38744177 re: Linked Data in heath informatics:
> Yeah, so it's at that workflow where they're not getting linked data edges out of their own unstructured data input.
Shadow of a Laser Beam
Discussion (53 points, 6 days ago, 20 comments) https://news.ycombinator.com/item?id=42145919
New theory reveals the shape of a single photon
"Exact Quantum Electrodynamics of Radiative Photonic Environments" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Show HN: I made an ls alternative for my personal use
Did anyone here use Genera on an original lisp machine? It had a pseudo-graphical interface and a directory listing provided clickable results. It would be really neat if we could use escaping to confer more information to the terminal about what a particular piece of text means.
Feature-request: bring back clickable ls results!
Bonus points for defining a new term type and standard for this.
There's already `ls --hyperlink` for clickable results, but that depends on your terminal supporting the URL escape sequence.
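The escape sequence in question is OSC 8. A minimal sketch of emitting it directly, which is all `ls --hyperlink` does per entry (whether it renders as clickable depends on the terminal):

```python
def osc8(url: str, text: str) -> str:
    """Wrap `text` in an OSC 8 hyperlink escape sequence:
    ESC ] 8 ; params ; URI ESC \\  ...text...  ESC ] 8 ; ; ESC \\"""
    return f"\033]8;;{url}\033\\{text}\033]8;;\033\\"

# A supporting terminal shows "hosts" as a link to the file URL.
print(osc8("file:///etc/hosts", "hosts"))
```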
NASA: Mystery of Life's Handedness Deepens
From "Amplification of electromagnetic fields by a rotating body" https://news.ycombinator.com/item?id=41873531 :
> ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
>> Could this be used as an engine of some kind?
> What about helical polarization?
If there is locomotion due to a dynamic between handed molecules and, say, helically polarized fields; is such handedness a survival selector for life in deep space?
Are chiral molecules more likely to land on earth?
> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
> Sugar molecules are asymmetrical / handed, per 3blue1brown and Steve Mould. /? https://www.google.com/search?q=Sugar+molecules+are+asymmetr...
> Is there a way to get the molecular propeller effect, and thereby molecular locomotion, with molecules that contain sugar and a rotating field or a rotating molecule within a field?
Though, a new and plausible terrestrial origin of life hypothesis:
Methane + Gamma radiation => Guanine && Earth thunderstorms => Gamma Radiation https://news.ycombinator.com/item?id=42131762#42157208 :
> A terrestrial life origin hypothesis: gamma radiation mutated methane (CH4) into Guanine (the G in ACGT) and then DNA and RNA.
Hybrid VQE photonic qudits for accurate QC without error-correction techniques
From OT TA NewsArticle: "Photon qubits challenge AI, enabling more accurate quantum computing without error-correction techniques" https://phys.org/news/2024-11-photon-qubits-ai-enabling-accu... :
> VQE is a hybrid algorithm designed to use a Quantum Processing Unit (QPU) and a Classical Processing Unit (CPU) together to perform faster computations.
> Global research teams, including IBM and Google, are investigating it in a variety of quantum systems, including superconducting and trapped-ion systems. However, qubit-based VQE is currently only implemented up to 2 qubits in photonic systems and 12 qubits in superconducting systems, and is challenged by error issues that make it difficult to scale when more qubits and complex computations are required. [...]
> In this study, a qudit was implemented by the orbital angular momentum state of a single-photon, and dimensional expansion was possible by adjusting the phase of a photon through holographic images. This allowed for high-dimensional calculations without complex quantum gates, reducing errors.
ScholarlyArticle: "Qudit-based variational quantum eigensolver using photonic orbital angular momentum states" (2024) https://www.science.org/doi/10.1126/sciadv.ado3472
> In this study, a qudit was implemented by the orbital angular momentum state of a single-photon, and dimensional expansion was possible by adjusting the phase of a photon through holographic images
From https://news.ycombinator.com/item?id=40281756 .. from "Physicists use a 350-year-old theorem to reveal new properties of light waves" (2023) https://phys.org/news/2023-08-physicists-year-old-theorem-re... :
> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity.
> [..] Qian's team interpreted the intensity of a light as the equivalent of a physical object's mass, then mapped those measurements onto a coordinate system that could be interpreted using Huygens' mechanical theorem. "Essentially, we found a way to translate an optical system so we could visualize it as a mechanical system, then describe it using well-established physical equations," explained Qian.
> Once the team visualized a light wave as part of a mechanical system, new connections between the wave's properties immediately became apparent—including the fact that entanglement and polarization stood in a clear relationship with one another.
Shouldn't that make holography and photonic phase sensors and light field cameras possible with existing photographic intensity sensors?
"Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
Huygens-Fresnel principle: https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel_princi...
Parallel axis theorem: https://en.wikipedia.org/wiki/Parallel_axis_theorem
VQE: Variational Quantum Eigensolver: https://en.wikipedia.org/wiki/Variational_quantum_eigensolve... :
> Given a guess or ansatz, the quantum processor calculates the expectation value of the system with respect to an observable, often the Hamiltonian, and a classical optimizer is used to improve the guess. The algorithm is based on the variational method of quantum mechanics. [...] It is an example of a noisy intermediate-scale quantum (NISQ) algorithm
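The hybrid loop in the quote, sketched fully classically: a one-parameter ansatz, the Hamiltonian H = diag(1, -1), and a crude classical "optimizer" (grid search) improving the guess. On real hardware the expectation value would be estimated by the QPU, not computed with math.cos; everything here is a toy stand-in.

```python
import math

def expectation(theta):
    """<psi|H|psi> for ansatz |psi(theta)> = (cos t, sin t) and H = diag(1, -1):
    cos^2(t) - sin^2(t). A QPU would estimate this from measurement statistics."""
    c, s = math.cos(theta), math.sin(theta)
    return 1.0 * c * c + (-1.0) * s * s

# Classical optimizer step: scan the parameter and keep the best guess.
thetas = [i * math.pi / 200 for i in range(201)]
best = min(thetas, key=expectation)
# ground-state energy of H is -1, found at theta = pi/2
```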
From https://news.ycombinator.com/item?id=42030319 :
> "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171
>> We fully resolve this problem, giving a polynomial time algorithm for learning H to precision ϵ from polynomially many copies of the Gibbs state at any constant β>0.
>> Our main technical contribution is a new flat polynomial approximation to the exponential function, and a translation between multi-variate scalar polynomials and nested commutators. This enables us to formulate Hamiltonian learning as a polynomial system. We then show that solving a low-degree sum-of-squares relaxation of this polynomial system suffices to accurately learn the Hamiltonian.
Relaxation (iterative method) https://en.wikipedia.org/wiki/Relaxation_(iterative_method)
Linux 6.13 will report the number of hung tasks since boot
What counts as a hung task? Blocking on unsatisfiable I/O for more than X seconds? Scheduler hasn’t gotten to it in X seconds?
If a server process is blocking on accept(), wouldn’t it count as hung until a remote client connects? or do only certain operations count?
torvalds/linux//kernel/hung_task.c :
static void check_hung_task(struct task_struct *t, unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
static void check_hung_uninterruptible_tasks(unsigned long timeout) https://github.com/torvalds/linux/blob/9f16d5e6f220661f73b36...
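Per that code, only TASK_UNINTERRUPTIBLE tasks are checked, so a process blocking in accept() (interruptible sleep) does not count. The threshold is runtime-tunable via /proc; a small sketch that reads it, falling back to the documented default of 120 seconds when the kernel was built without CONFIG_DETECT_HUNG_TASK:

```python
from pathlib import Path

def hung_task_timeout(default=120):
    """Return the hung-task detector threshold in seconds, or `default`
    when /proc/sys/kernel/hung_task_timeout_secs is absent or unreadable."""
    try:
        return int(Path("/proc/sys/kernel/hung_task_timeout_secs").read_text())
    except (OSError, ValueError):
        return default
```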
Just to double check my understanding (because being wrong on the internet is perhaps the fastest way to get people to check your work):
Is this saying that regular tasks that haven't been scheduled for two minutes and tasks that are uninterruptible (truly so, not idle or also killable despite being marked as uninterruptible) that haven't been woken up for two minutes are counted?
Show HN: Llama 3.2 Interpretability with Sparse Autoencoders
I spent a lot of time and money on this rather big side project of mine that attempts to replicate the mechanistic interpretability research on proprietary LLMs that was quite popular this year and produced great research papers by Anthropic [1], OpenAI [2] and Deepmind [3].
I am quite proud of this project, and since I consider myself part of the target audience of HackerNews, I thought that maybe some of you would appreciate this open research replication as well. Happy to answer any questions or face any feedback.
Cheers
[1] https://transformer-circuits.pub/2024/scaling-monosemanticit...
[2] https://arxiv.org/abs/2406.04093
[3] https://arxiv.org/abs/2408.05147
TODO: the relative performance in err/watts/time compared to deep learning for feature selection instead of principal component analysis, and to standard xgboost or tabular xt for optimization given the indicating features.
XAI: Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
/? XAI , #XAI , Explain, EXPLAIN PLAN , error/energy/time
From "Interpretable graph neural networks for tabular data" (2023) https://news.ycombinator.com/item?id=37269881 :
> TabPFN: https://github.com/automl/TabPFN .. https://x.com/FrankRHutter/status/1583410845307977733 [2022]
"TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second" (2022) https://arxiv.org/abs/2308.08945
> FWIU TabPFN is Bayesian-calibrated/trained with better performance than xgboost for non-categorical data
From https://news.ycombinator.com/item?id=34619013 :
> /? awesome "explainable ai" https://www.google.com/search?q=awesome+%22explainable+ai%22
- (Many other great resources)
- https://github.com/neomatrix369/awesome-ai-ml-dl/blob/master... :
> Post model-creation analysis, ML interpretation/explainability
> /? awesome "explainable ai" "XAI"
"A Survey of Privacy-Preserving Model Explanations: Awesome Privacy-Preserving Explainable AI" https://awesome-privex.github.io/
Show HN: We open-sourced our compost monitoring tech
I'm from a compost tech startup (Monty Compost Co.) focused on making composting more efficient for households and industrial facilities. But our tech isn’t just for composting— it’s a versatile system that can be repurposed for a wide range of applications. So, we’ve made it open source for anyone to experiment with!
One of the exciting things about our open-source compost monitoring tech is its flexibility. You can connect it to platforms like Raspberry Pi, Arduino, or other single-board computers to expand its capabilities or integrate it into your own projects.
Our system includes sensors for: * Gas composition * Temperature * Moisture levels * Air pressure
All data can be exported as CSV files for analysis. While it’s originally built for monitoring compost, the hardware and data capabilities are versatile and could be repurposed for other applications (IoT, environmental monitoring, etc.)
Hacker’s Guide to Monty Tech: https://github.com/gtls64/MontyHome-Hackers-Guide
If you’re into data, sensors, or creative tech hacks, we’d love for you to check it out and let us know what you build!
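Since the data exports as CSV, stdlib tooling is enough for a first pass at analysis. A sketch with assumed column names (this is not the actual Monty export schema, just an illustration):

```python
import csv
import io

# Hypothetical export: column names and values are assumptions for illustration.
raw = """timestamp,temperature_c,moisture_pct,co2_ppm
2024-11-01T08:00,52.1,61,8200
2024-11-01T09:00,53.4,60,8350
2024-11-01T10:00,54.0,58,8600
"""

rows = list(csv.DictReader(io.StringIO(raw)))
avg_moisture = sum(float(r["moisture_pct"]) for r in rows) / len(rows)
max_temp = max(float(r["temperature_c"]) for r in rows)
# Common hot-composting guidelines: pile temperature roughly 50-65 C,
# moisture roughly 50-60% -- thresholds one could alert on.
```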
collectd is an open source monitoring system which can record to e.g. RRD flat files or SQLite and can forward collected metrics to SIEM-like monitoring and charting and anomaly detection apps like Grafana or InfluxDB.
Nagios has "flap detection" to prevent spurious notifications.
collectd-python-plugins includes Python scripts for monitoring humidity and temp with i2c sensors and Python: https://github.com/dbrgn/collectd-python-plugins
There are LoraWAN soil moisture sensors, but they require batteries or an in-field charging method
"Satellite images of plants' fluorescence can predict crop yields" (2024)
"Sensor-Free Soil Moisture Sensing Using LoRa Signals (2022)" https://dl.acm.org/doi/abs/10.1145/3534608 .. https://news.ycombinator.com/context?id=40234912
/? open source soil moisture sensor: https://www.google.com/search?q=open+source+soil+moisture+se...
Thanks so much for sharing these resources—this is fantastic!
If you’re into LoRaWAN, you might be interested to hear that we’re also developing an industrial composting monitor that incorporates LoRaWAN tech. Here’s the promo video link if you’d like to check it out: https://www.youtube.com/watch?v=TZFiiwLhZh8&feature=youtu.be
Can your sensor product feed data to open source software for hobbyist and professional agriculture?
I set up FarmOS in a container once; the PWA approach to the offline mobile app was cool but I guess I wasn't that committed to manual data collection or hobbyist gardening.
Are there open standards to support architectural sensor data?
Where is the identifier on the sensor? How does the user scan the visually-confirmable sensor barcode or QR code or similar and associate that with a garden bed or a container?
How does it notify of low battery status; is there a voltage reading to predict the out of battery condition?
Is there a configurable polling interval?
How do I find a sensor unit with a dead battery; is there a low-power chirp, or do I need a metal detector or very directional wireless sensors and triangulation or trilateration?
Are there nooks and crannies in the casing?
How to replace the battery?
Can they be made out of compostable materials? E.g. carbon with existing nanofabrication capabilities
After Single Walled Twisted Carbon Nanotube batteries which are unfortunately still only in the lab, and more practically Sodium Ion, which batteries can safely be discarded or recycled in the agricultural field?
LoRaWAN may be more economical than multiple directional long range WiFi antenna like can be found on YouTube. https://youtu.be/GWq6L94ImX8
Notes on LoRa and OpenWRT, which also supports rtl-sdr, BATMAN wifi mesh networking, and (dual) Mini PCIe 4G radios: https://news.ycombinator.com/item?id=22735933
Thank you so much for these questions! For clarity, I’ll copy and paste the question for each response:
Q: Can your sensor product feed data to open-source software for hobbyist and professional agriculture?
A: Yes, our sensors can definitely feed data into open-source platforms, making them a great fit for both hobbyist and professional agriculture setups. The guide offers a helpful starting point for integration. Additionally, data collected by the sensors can be exported as a CSV through our consumer app. This makes it easy to process the information or import it into FarmOS or other open-source tools you might be using.
Q: Where is the identifier on the sensor? How does the user scan the visually-confirmable sensor barcode or QR code or similar and associate that with a garden bed or a container?
A: Each sensor uses its unique MAC address as its name. To make setup even easier, each device features a visually confirmable barcode or QR code. Once connected, the sensor’s indicator lights confirm the connection status, so you’ll know right away when it’s properly associated.
Q: How does it notify of low battery status; is there a voltage reading to predict the out-of-battery condition? Is there a configurable polling interval?
A: For low battery notifications, the device features a red indicator light that activates when the battery is running low. Additionally, you can poll the device over Bluetooth to get the current battery level.
Q: How do I find a sensor unit with a dead battery; is there a low-power chirp, or do I need a metal detector or very directional wireless sensors and triangulation or trilateration?
A: A low battery is signalled by the sensor’s lights turning orange, while a dead battery is indicated by the absence of flashing blue lights. If the sensor still has some power, you can poll it via Bluetooth to check its battery level.
Q: Are there nooks and crannies in the casing?
A: Yes, if you’re thinking of something specific for this — please let me know! I’m happy to chat further :)
Q: How to replace the battery?
A: Currently, the battery in our sensors is not replaceable. However, when the device reaches the end of its life, we’re committed to sustainability. We plan to offer users a significant replacement discount and take back the module and responsibly recycle it into new Montys. Interestingly, the original Monty design included a removable battery pack. Through testing, we discovered that most connectors weren’t durable enough to withstand the tough composting environment, so we shifted to a sealed design to ensure long-term reliability.
Q: Can they be made out of compostable materials? E.g. carbon with existing nanofabrication capabilities
A: We’ve trialed biodegradable plastics in the past, but we found that they degraded in the field. Instead, we’ve opted for a 100% recyclable plastic material to ensure that it is able to withstand harsh compost conditions.
Hopefully, I’ve covered everything here—if you have any further questions, just let me know!
/? crop monitoring wikipedia: https://www.google.com/search?q=crop+monitoring+wikipedia ...
Precision agriculture: https://en.wikipedia.org/wiki/Precision_agriculture
Digital agriculture: https://en.wikipedia.org/wiki/Digital_agriculture
/? crop monitoring system site:github.com https://www.google.com/search?q=crop+monitoring+system+site%...
SIEM: https://en.wikipedia.org/wiki/Security_information_and_event...
Show HN: Retry a command with exponential backoff and jitter (+ Starlark exprs)
This looks cool - will give it a try (hah!) Curious on why you picked starlark instead of cel for the conditional scripting part?
All right, let me tell you the history of recur to explain this choice. :-)
I wrote the initial version in Python in 2023 and used simpleeval [1] for the condition expressions. The readme for simpleeval states:
> I've done the best I can with this library - but there's no warranty, no guarantee, nada. A lot of very clever people think the whole idea of trying to sandbox CPython is impossible. Read the code yourself, and use it at your own risk.
In early 2024, simpleeval had a vulnerability report that drove the point home [2]. I wanted to switch to a safer expression library in the rare event someone passed untrusted arguments to `--condition` and as a matter of craft. (I like simpleeval, but I think it should be used for expressions in trusted or semi-trusted environments.)
First, I evaluated cel-python [3]. I didn't adopt it over the binary dependency on PyYAML and concerns about maturity. I also wished to avoid drastically changing the condition language.
Next in line was python-starlark-go [4], which I had only used in a project that ultimately didn't need complex config. I had been interested in Starlark for a while. It was an attractive alternative to conventional software configuration. I saw an opportunity to really try it.
A switch to python-starlark-go would have made platform-independent zipapps I built with shiv [5] no longer an option. This was when I realized I might as well port recur to Go, use starlark-go natively, and get static binaries out of it. I could have gone with cel-go, but like I said, I was interested in Starlark and wanted to keep expressions similar to how they were with simpleeval.
[1] https://github.com/danthedeckie/simpleeval
[2] https://github.com/danthedeckie/simpleeval/issues/138
[3] https://github.com/cloud-custodian/cel-python
There's not yet a Python implementation of Starlark (which, like Bazel, is a fork of Skylark FWIU)?
All that have tried to sandbox Python with Python have failed. E.g. RestrictedPython and RPython
I believe Skylark was renamed Starlark, even internally. Bazel is a build system that uses Starlark as a configuration language, not a fork of Starlark.
Bazel is an open source rewrite of Blaze (which introduced Skylark)
Less a rewrite, more a variant: a majority of the source is common to both.
Systemd does exponential retry but IDK about jitter?
A systemd unit file with RestartMaxDelaySec= for exponential back off:
[Unit]
Description="test exponential retry"
[Service]
ExecStart=-/bin/sh -c "date -Is"
Type=simple
Restart=on-failure
RestartSec=60ms
# example values: grow from RestartSec=60ms to 30s over 5 steps
RestartMaxDelaySec=30s
RestartSteps=5
# StartLimitBurst=2
# StartLimitIntervalSec=30
# OnFailure=
# TimeoutStartSec=
# TimeoutStartFailureMode=
# WatchdogSec=
# RestartPreventExitStatus=
# RestartForceExitStatus=
# OOMPolicy=
[Install]
WantedBy=default.target
man systemd.service
https://www.freedesktop.org/software/systemd/man/latest/syst...
man systemd.unit
https://www.freedesktop.org/software/systemd/man/latest/syst...
man systemd.resource-control https://www.freedesktop.org/software/systemd/man/latest/syst...
man systemd.service > "Table 1. Exit causes and the effect of the Restart= settings" https://www.freedesktop.org/software/systemd/man/latest/syst...
"RFE: Exponentially increasing RestartSec=" https://github.com/systemd/systemd/issues/6129#issuecomment-...
"core: add RestartSteps= and RestartSecMax= to dynamically increase interval between auto restarts" https://github.com/systemd/systemd/pull/26902
> RestartSteps= accepts a positive integer as the number of steps to take to increase the interval between auto-restarts from RestartSec= to RestartMaxDelaySec=
/? RestartMaxDelaySec https://www.google.com/search?q=RestartMaxDelaySec
Is this where to add jitter?
"core/service: make restart delay increase more smoothly" https://github.com/systemd/systemd/pull/28266/files
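Per the thread, systemd's RestartSec=/RestartMaxDelaySec= growth is deterministic; jitter would have to be layered on top. A minimal stdlib sketch of the "full jitter" strategy (illustrative only, not recur's actual implementation):

```python
import random

def backoff_delays(base=0.06, cap=30.0, attempts=10, rng=random.random):
    """Exponential backoff with "full jitter": the deterministic ceiling
    doubles each attempt (like RestartSec= growing toward
    RestartMaxDelaySec=), and the actual sleep is drawn uniformly
    from [0, ceiling]."""
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield rng() * ceiling

for i, delay in enumerate(backoff_delays()):
    print(f"attempt {i}: sleep {delay:.3f}s")
```

The jitter spreads out retry storms when many clients fail at once, which a fixed systemd restart interval cannot do.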
Show HN: Physically accurate black hole simulation using your iPhone camera
Hello! We are Dr. Roman Berens, Prof. Alex Lupsasca, and Trevor Gravely (PhD Candidate) and we are physicists working at Vanderbilt University. We are excited to share Black Hole Vision: https://apps.apple.com/us/app/black-hole-vision/id6737292448.
Black Hole Vision simulates the gravitational lensing effects of a black hole and applies these effects to the video feeds from an iPhone's cameras. The application implements the lensing equations derived from general relativity (see https://arxiv.org/abs/1910.12881 if you are interested in the details) to create a physically accurate effect.
The app can either put a black hole in front of the main camera to show your environment as lensed by a black hole, or it can be used in "selfie" mode with the black hole in front of the front-facing camera to show you a lensed version of yourself.
As far as I can tell, the black holes you're generating don't look especially correct in the preview: they should have a circular shadow like this https://i.imgur.com/zeShgrx.jpeg
What the black hole looks like depends on how you define your field of view. And if the black hole is spinning, then you don't expect a circular shadow at all. But in our app, if you pick the "Static black hole" (the non-rotating, Schwarzschild case) and select the "Full FOV" option, then you will see the circular shadow that you expect.
The preview shows that you have a static black hole selected
This shape of the shadow is also wrong for Kerr, though; this is what Kerr looks like:
"static" is "Schwarzschild without rotation"?
Do black holes have hair?
Where is the Hawking radiation in these models? Does it diffuse through the boundary and the outer system?
What about black hole jets?
What about vortices? With Gross-Pitaevskii and SQR Superfluid Quantum Relativity
https://westurner.github.io/hnlog/ Ctrl-F Fedi , Bernoulli, Gross-Pitaevskii:
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/ :
> FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii,), and rejects antimatter.
Actual observations of black holes:
"This image shows the observed image of M87's black hole (left) the simulation obtained with a General Relativistic Magnetohydrodynamics model, blurred to the resolution of the Event Horizon Telescope [...]" https://www.reddit.com/r/space/comments/bd59mp/this_image_sh...
"Stars orbiting the black hole at the heart of the Milky Way" ESO. https://youtube.com/watch?v=TF8THY5spmo&
"Motion of stars around Sagittarius A*" Keck/UCLA. https://youtube.com/shorts/A2jcVusR54E
/? M87a time lapse
/? Sagittarius A time lapse
/? black hole vortex dynamics
"Cosmic Simulation Reveals How Black Holes Grow and Evolve" (2024) https://www.caltech.edu/about/news/cosmic-simulation-reveals...
"FORGE’d in FIRE: Resolving the End of Star Formation and Structure of AGN Accretion Disks from Cosmological Initial Conditions" (2024) https://astro.theoj.org/article/94757-forge-d-in-fire-resolv...
STARFORGE
GIZMO: http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html .. MPI+OpenMP .. Src: https://github.com/pfhopkins/gizmo-public :
> This is GIZMO: a flexible, multi-method multi-physics code. The code solves the fluid using Lagrangian mesh-free finite-volume Godunov methods (or SPH, or fixed-grid Eulerian methods), and self-gravity with fast hybrid PM-Tree methods and fully-adaptive resolution. Other physics include: magnetic fields (ideal and non-ideal), radiation-hydrodynamics, anisotropic conduction and viscosity, sub-grid turbulent diffusion, radiative cooling, cosmological integration, sink particles, dust-gas mixtures, cosmic rays, degenerate equations of state, galaxy/star/black hole formation and feedback, self-interacting and scalar-field dark matter, on-the-fly structure finding, and more.
Webvm: Virtual Machine for the Web
/? "webvm" jupyter: https://www.google.com/search?q=%22webvm%22+jupyter
- "Is WebVM a potential solution to "JupyterLite doesn't have a bash/zsh shell"?" https://news.ycombinator.com/item?id=30167403#30168491 :
- [ ] "ENH: Terminal and Shell: BusyBox, bash/zsh, git; WebVM," https://github.com/jupyterlite/jupyterlite/issues/949
And then now WASM containers in the browser FWIU
Convolutional Differentiable Logic Gate Networks
The Solvay-Kitaev algorithm for quantum logical circuit construction in context to "Artificial Intelligence for Quantum Computing" (2024): https://news.ycombinator.com/item?id=42155909#42157508
From https://news.ycombinator.com/item?id=37379123 :
Quantum logic gate > Universal quantum gates: https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q... :
> Some universal quantum gate sets include:
> - The rotation operators Rx(θ), Ry(θ), Rz(θ), the phase shift gate P(φ)[c] and CNOT are commonly used to form a universal quantum gate set.
> - The Clifford set {CNOT, H, S} + T gate. The Clifford set alone is not a universal quantum gate set, as it can be efficiently simulated classically according to the Gottesman–Knill theorem.
> - The Toffoli gate + Hadamard gate. ; [[CCNOT,CCX,TOFF], H]
> - [The Deutsch Gate]
"Convolutional Differentiable Logic Gate Networks" (2024) https://arxiv.org/abs/2411.04732 :
> With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference. Recently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approaches because their inference only requires logic gate operators such as NAND, OR, and XOR, which are the underlying building blocks of current hardware and can be efficiently executed. We build on this idea, extending it by deep logic gate tree convolutions, logical OR pooling, and residual initializations. This allows scaling logic gate networks up by over one order of magnitude and utilizing the paradigm of convolution. On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
Quick Start to Quantum Programming with CUDA-Q
NVIDIA/cuda-q-academic: https://github.com/NVIDIA/cuda-q-academic
NVIDIA/cuda-quantum; CUDA-Q: src: https://github.com/NVIDIA/cuda-quantum .. docs: https://nvidia.github.io/cuda-quantum/latest/
tequilahub/tequila issues > "CUDA-Q": https://github.com/tequilahub/tequila/issues/374
Matching patients to clinical trials with large language models
"Matching Patients to Clinical Trials with Large Language Models" (2024) https://arxiv.org/abs/2307.15051
Src: https://github.com/ncbi-nlp/TrialGPT
NewsArticle: "NIH-developed AI algorithm matches potential volunteers to clinical trials" (2024) https://www.nih.gov/news-events/news-releases/nih-developed-... :
> Such an algorithm may save clinicians time and accelerate clinical enrollment and research [...]
> A study published in Nature Communications found that the AI algorithm, called TrialGPT, could successfully identify relevant clinical trials for which a person is eligible and provide a summary that clearly explains how that person meets the criteria for study enrollment. The researchers concluded that this tool could help clinicians navigate the vast and ever-changing range of clinical trials available to their patients, which may lead to improved clinical trial enrollment and faster progress in medical research.
Voyager 1 breaks its silence with NASA via radio transmitter not used since 1981
I didn’t realize the Voyagers relied on a once-in-175-years planetary alignment. What a lucky break that technology had advanced to the point we could make use of it.
I’m trying to figure out why the alignment was so important.
Wouldn’t it be possible to get a gravity assist in any alignment - albeit just taking a little longer to ping-pong across the system?
The scenario where a gravity assist doesn't work is if turning sharply enough would require your minimum planetcentric approach distance to be less than the radius of the planet - you'd crash into it instead.
This is why Voyager 2 couldn't also do Pluto - it would have needed to change course by roughly 90º at Neptune, which would have required going closer to the center of Neptune than Neptune's own radius.
The most unusual gravity-assist alignment that we did was for Pioneer 11 going from Jupiter to Saturn. The encounters were separated by roughly 120º of heliocentric longitude. Pioneer 11 used Jupiter to bend its path "up" out of the ecliptic plane and encountered Saturn on the way back "down". Nowadays we wouldn't bother doing that (we'd wait for a more direct launch window instead), but the purpose of this was to get preliminary Jupiter and Saturn encounters done in time before Voyager's launch window for the grand tour alignment.
Could Voyager have reduced velocity, glanced off Neptune, waited for the return path on a narrow elliptical orbit, and then boosted to effectively make the 90° turn to Pluto at that time? Was that impossible given its Neptune approach trajectory, or would glancing off and waiting have been more fuel?
And, why didn't this vortical model that includes the forward velocity of the sun make a difference for Voyager's orbital trajectory and current position relative to earth? https://news.ycombinator.com/item?id=42159195 :
> "The helical model - our solar system is a vortex" https://youtube.com/watch?v=0jHsq36_NTU
Breaking that down: Voyager itself couldn't have reduced velocity, it had nowhere near enough reaction mass to do that. Hypothetically a spacecraft could but you might be talking about orders of magnitude more reaction mass. (Which means multiples more of the fuel to launch and accelerate that mass itself, which could quickly escalate beyond any chemical rocket capabilities.)
It also likely wasn't possible to get to Pluto on some future Pluto orbital pass. The limiting factor is likely that Voyager's incoming trajectory to Neptune was already too far beyond solar escape velocity to get into that narrow elliptical orbit you propose. (You'd have to slingshot so close to Neptune's center that you'd hit the planet instead.)
Designing from the beginning to come in slower to Neptune and adjust to encounter Pluto on some future Pluto orbital pass was probably possible, but yeah you might be talking about time scales of Pluto's entire orbit or even multiples of that. (We do similar things for inner solar system missions, like several encounters with Venus separated by multiple Venus-years, but that's on the order of single-digit years and not hundreds.)
The common answer to a lot of these outlandish slingshot questions is usually, yes it's eventually possible by orbital mechanics, but it gets so complicated and lengthy that you may as well just build another separate spacecraft instead. We talk about Voyager's grand tour alignment because it's captivating, but realistically if that hadn't happened we would have just done separate Jupiter-Uranus and Jupiter-Neptune missions instead.
The sun's motion relative to the galaxy doesn't matter for any of this - nothing else in the galaxy is remotely close enough to affect anything, the nearest star is still over 1000x Voyager's distance.
Is it possible to focus on a reflection on the Voyager spacecraft; or how aren't communications ever affected by lack of line of sight?
So there was no way to flip around and counter-thrust due to the velocity by that point in Voyager's trajectory (without a gravitationally-assisted slowdown or waiting for planetary orbits to align the same or in a feasible way)
FWICS; /? spirograph ... "Hypotrochoid" ... Hypotrochoid orbit
Aren't there hypotrochoid orbits to accelerate and decelerate using planetary gravity; gravity assist
Gravity assist: https://en.wikipedia.org/wiki/Gravity_assist
- https://space.stackexchange.com/questions/10021/how-are-grav... :
> How can I intuitively understand gravity assists?:
> Aim closer to the planet for a lower pass for a greater change in direction (and velocity from an external frame of reference), farther from the planet for a smaller change; aim ahead of the planet for a slower resulting external velocity, behind for a higher velocity: gravity assist guide (image from this KSP tutorial)
- Vindication! KSP is what I probably would have used to answer questions like this; though KSP2 doesn't work in Proton-GE on Steam on Linux and they've since disbanded / adjourned the KSP2 team fwiu.
- JPL SPICE toolkit: https://naif.jpl.nasa.gov/naif/toolkit.html
- SpiceyPy; src: https://github.com/AndrewAnnex/SpiceyPy docs: https://spiceypy.readthedocs.io/en/stable/
- SpiceyPy docs > Lessons: https://spiceypy.readthedocs.io/en/stable/lessonindex.html :
- > various SPICE lessons provided by the NAIF translated to use python code examples
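The intuition in the gravity assist guide above follows from the hyperbolic flyby turning-angle relation, sin(δ/2) = 1 / (1 + r_p·v∞²/μ): a lower periapsis or slower approach gives a sharper turn, and the planet's radius puts a floor on r_p, which caps the achievable turn (why the ~90° Neptune-to-Pluto bend wasn't possible). A sketch with illustrative numbers (assumed, not Voyager 2's actual encounter values):

```python
import math

def turn_angle_deg(r_p_km, v_inf_kms, mu_km3_s2):
    """Hyperbolic flyby turn angle: sin(delta/2) = 1/(1 + r_p*v_inf^2/mu).
    Smaller periapsis r_p or slower approach v_inf -> sharper turn."""
    return 2 * math.degrees(
        math.asin(1.0 / (1.0 + r_p_km * v_inf_kms**2 / mu_km3_s2)))

MU_NEPTUNE = 6.8365e6   # km^3/s^2
R_NEPTUNE = 24622.0     # km
# Even a grazing pass at a fast hyperbolic approach can't bend ~90 degrees:
print(turn_angle_deg(R_NEPTUNE, 20.0, MU_NEPTUNE))
```

Slowing the approach (smaller v∞) raises the maximum turn sharply, which is why the incoming speed, not just the aim point, decided what was reachable.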
TY for the explanation.
Hopefully cost effective solar sails are feasible.
FWIU SQR Superfluid Quantum Relativity doesn't matter for satellites at Earth-Sun Lagrangian points either; but, at Lagrangian points, don't they have to use thrust to rotate to account for the gravitational 'wind' due to additional local masses in the (probably vortical) n-body attractor system?
Bpftune uses BPF to auto-tune Linux systems
With this tool I am wary that I'll encounter system issues that are dramatically more difficult to diagnose and troubleshoot because I'll have drifted from a standard distro configuration. And in ways I'm unaware of. Is this a reasonable hesitation?
Yes, it is. IMO, except for learning (which should not be done in prod), you shouldn’t make changes that you don’t understand.
The tools seems to mostly tweak various networking settings. You could set up a test instance with monitoring, throw load at it, and change the parameters the tool modifies (one at a time!) to see how it reacts.
I'd run such a tool on prod in "advice mode". It should suggest the tweaks, explaining the reasoning behind them, and listing the actions necessary to implement them.
Then humans would decide if they want to implement that as is, partly, modified, or not at all.
Fair point, though I didn’t see any such option with this tool.
It's developed in the open; we can create Github issue.
In the existing issue, we can link to the code and docs that would need to be understood and changed:
usage, main() https://github.com/oracle/bpftune/blob/6a50f5ff619caeea6f04d...
- [ ] CLI opts: --pretend-allow <tuner> or --log-only-allow <tuner> or [...]
Probably relevant function headers in libbpftune.c:
bpftune_sysctl_write(
bpftuner_tunable_sysctl_write(
bpftune_module_load(
static void bpftuner_scenario_log(struct bpftuner *tuner, unsigned int tunable,
https://github.com/oracle/bpftune/blob/6a50f5ff619caeea6f04d...
https://github.com/oracle/bpftune/blob/6a50f5ff619caeea6f04d...
Here's that, though this is downvoted to -1? https://github.com/oracle/bpftune/issues/99#issuecomment-248...
Ideally this tool could passively monitor and recommend instead of changing settings in production which could lead to loss of availability by feedback failure; -R / --rollback actively changes settings, which could be queued or logged as idk json or json-ld messages.
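A sketch of what such an advice-mode record might look like; the field names here are invented for illustration (bpftune has no such interface today):

```python
import json

# Hypothetical advice-mode record a --log-only mode might emit
# instead of writing the sysctl itself:
advice = {
    "tuner": "sysctl",
    "tunable": "net.ipv4.tcp_rmem",
    "current": "4096 131072 6291456",
    "recommended": "8192 262144 12582912",
    "reason": "receive buffer limit reached repeatedly under load",
    "action": "sysctl -w net.ipv4.tcp_rmem='8192 262144 12582912'",
}
print(json.dumps(advice, indent=2))
```

Structured records like this could be queued for human review, or fed to a change-management system, instead of mutating production settings directly.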
Transistor for fuzzy logic hardware: promise for better edge computing
I've heard it said that we could do quantum computational operations with analog electronic components, except for the variance in component quality.
How do these fuzzy logic electronic components overcome the same challenges as analog electronic quantum computing?
The article describes performance on a visual CNN Convolutional Neural Network ML task.
Is quantum logic the only sufficient logic to describe systems with phase?
What is most production cost and operating cost efficient at this or similar ML tasks?
For reference, from https://news.ycombinator.com/item?id=41322088 re: "A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
> The TPU is constructed with a systolic array architecture that allows parallel 2 bit integer multiply–accumulate operations. A five-layer convolutional neural network based on the TPU can perform MNIST image recognition with an accuracy of up to 88% for a power consumption of 295 µW. We use an optimized nanotube fabrication process [...] 1 TOPS/W/s
ScholarlyArticle: "A van der Waals interfacial junction transistor for reconfigurable fuzzy logic hardware" (2024) https://www.nature.com/articles/s41928-024-01256-3
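For reference, the connectives such a reconfigurable fuzzy-logic device would evaluate in hardware are Zadeh's min/max operators on membership degrees in [0, 1] (a software sketch, not the device model from the paper):

```python
# Zadeh's fuzzy connectives on membership degrees in [0, 1]:
def f_and(a, b): return min(a, b)   # t-norm
def f_or(a, b):  return max(a, b)   # t-conorm
def f_not(a):    return 1.0 - a

# Crisp 0/1 inputs recover ordinary Boolean logic:
assert f_and(1, 0) == 0 and f_or(1, 0) == 1
# Graded inputs stay graded, unlike a hard threshold:
print(f_and(0.7, 0.4), f_or(0.7, 0.4), f_not(0.7))
```

The in-between outputs are the point: a transistor that natively produces graded responses avoids the digitization step a CMOS implementation would need.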
Two galaxies aligned in a way where their gravity acts as a compound lens
If the lens curved light back toward us, could we see earth several million years ago?
Technically? But the image would be very, very small, so we'd need a detector bigger than the solar system (guesstimate) just to detect it. And that's just to detect it: I can't imagine what it would take to resolve the image. The tricks in this paper are a start.
To zoom into a reflection on a lens or a water droplet?
From "Hear the sounds of Earth's magnetic field from 41,000 years ago" (2024) https://news.ycombinator.com/item?id=42010159 :
> [ Redshift, Doppler effect, ]
> to recall Earth's magnetic field from 41,000 years ago with such a method would presumably require a reflection (41,000/2 = 20,500) light years away
To see Earth in a reflection, though
Age of the Earth: https://en.wikipedia.org/wiki/Age_of_Earth :
> 4.54 × 10^9 years ± 1%
"J1721+8842: The first Einstein zig-zag lens" (2024) https://arxiv.org/abs/2411.04177v1
What is the distance to the centroid of the (possibly vortical ?) lens effect from Earth in light years?
/? J1721+8842 distance from Earth in light years
- https://www.iflscience.com/first-known-double-gravitational-... :
> The first lens is relatively close to the source, with a distance estimated at 10.2 billion light-years. What happens is that the quasar’s light is magnified and multiplied by this massive galaxy. Two of the images are deflected in the opposite direction as they reach the second lens, another massive galaxy. The path of the light is a zig-zag between the quasar, the first lens, and then the second one, which is just 2.3 billion light-years away
So, given a simplistic model with no relative motion between earth and the presumed constant location lens:
Earth formation: 4.54b years ago
2.3b * 2 = 4.6b years ago
10.2b * 2 = 20.4b years ago
Does it matter that our models of the solar system typically omit that the sun is traveling through the universe (with the planets swirling, now coplanarly, and trailing behind), and would the relative motion of a black hole at the edge of our solar system change the paths between here and a distant reflector over time?
"The helical model - our solar system is a vortex" https://youtube.com/watch?v=0jHsq36_NTU
YC is wrong about LLMs for chip design
IDK about LLMs there either.
A non-LLM monte carlo AI approach: "Pushing the Limits of Machine Design: Automated CPU Design with AI" (2023) https://arxiv.org/abs/2306.12456 .. https://news.ycombinator.com/item?id=36565671
A useful target for whichever approach is most efficient at IP-feasible design:
From https://news.ycombinator.com/item?id=41322134 :
> "Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490
Artificial Intelligence for Quantum Computing
My master's thesis was on using machine learning techniques to synthesise quantum circuits. Since any operation on a QC can be represented as a unitary matrix, my research topic was: using ML, given a set of gates, how many of them and in what arrangement could generate or at least approximate this matrix. Another aspect was: given a unitary matrix, could a neural network predict the number of gates needed to implement that matrix as a quantum circuit, thereby giving us a measure of complexity. It was a lot of fun to test different algorithms, from genetic algorithms to neural network architectures. Back then NNs were a lot smaller and I trained them mostly on one GPU, and since the matrices grow exponentially with the number of qubits in the circuit it was only possible for me to investigate small circuits with fewer than a dozen qubits, but it was still nice to see that in principle this worked quite well.
How did it do compared to the baseline of the Solovay-Kitaev algorithm (and the more advanced algorithms that experts have come up with since)?
How (if at all) can the ML approach come up with a circuit when the unitary is too big to explicitly represent it as all the terms in the matrix but the general form of it is known? Eg it is known what the quantum fourier transform on N qubits is defined to be, and consequently any particular element within its matrix is easy to calculate, but you don't need (and shouldn't try) to write it out as a 2^n x 2^n matrix to figure out its implementation.
Solovay–Kitaev theorem: https://en.wikipedia.org/wiki/Solovay%E2%80%93Kitaev_theorem
/? Solvay-Kiteav theorem Cirq QISkit: https://www.google.com/search?q=Solvay-Kiteav+theorem+cirq+q...
qiskit/transpiler/passes/synthesis/solovay_kitaev_synthesis.py: https://github.com/Qiskit/qiskit/blob/main/qiskit/transpiler...
qiskit/synthesis/discrete_basis/solovay_kitaev.py: https://github.com/Qiskit/qiskit/blob/stable/1.2/qiskit/synt...
SolovayKitaevDecomposition: https://docs.quantum.ibm.com/api/qiskit/qiskit.synthesis.Sol...
What are more current alternatives to the Solovay–Kitaev theorem for gate-based quantum computing?
The more current alternatives (IIRC) are usually specific to the particular hardware and the gates it's capable of, rather than a general solution.
Though to be fair this topic was tangential to my focus and I'm a bit rusty on it so I'd have to look around to answer that.
Gamma rays convert CH4 to complex organic molecules, may explain origin of life
ScholarlyArticle: "γ-Ray Driven Aqueous-Phase Methane Conversions into Complex Molecules up to Glycine" (2024) https://onlinelibrary.wiley.com/doi/10.1002/anie.202413296 :
> Abstract: γ-Ray is capable of driving an aqueous-phase CH4+H2O+O2 conversion selectively into CH3COOH via ⋅CH3 and ⋅OH radical intermediates, and driving an aqueous-phase CH3COOH+NH3 conversion into NH2CH2COOH via ⋅CH2COOH and ⋅NH2 radical intermediates. Such γ-ray driven aqueous-phase methane conversions probably play an important role in the formation network of complex organic molecules in the universe, and provides a strategy to convert abundant CH4 into value-added products at mild conditions.
Glycine: https://en.wikipedia.org/wiki/Glycine :
> Glycine (symbol Gly or G;) is an amino acid that has a single hydrogen atom as its side chain. It is the simplest stable amino acid (carbamic acid is unstable). Glycine is one of the proteinogenic amino acids. It is encoded by all the codons starting with GG (GGU, GGC, GGA, GGG). [8]
"Gamma radiation is produced in large tropical thunderstorms" (2024) https://news.ycombinator.com/item?id=41726698 :
"Flickering gamma-ray flashes, the missing link between gamma glows and TGFs" (2024) https://www.nature.com/articles/s41586-024-07893-0
Virtual black hole: https://en.wikipedia.org/wiki/Virtual_black_hole ; "planck relics in the quantum foam"
Gamma ray: https://en.wikipedia.org/wiki/Gamma_ray
How are gamma rays causally linked to virtual black holes or planck relics in the quantum foam?
And, is there Hawking radiation from virtual black holes, at what points in their lifecycle?
Window coating reflects heat to cool buildings by 40 degrees
"Neutral-Colored Transparent Radiative Cooler by Tailoring Solar Absorption with Punctured Bragg Reflectors" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202410613
Memories are not only in the brain, human cell study finds
This raises the question of whether there are hereditary information-transmission methods other than DNA. There are so many things we ascribe to “instinct” that might be information transmitted from parent to offspring in some encoded format.
Like songs that newborn songbirds know, migration routes that animals know without being shown, that a mother dog should break the amniotic sac to release the puppies inside, what body shapes should be considered more desirable for a mate out of an infinite variety of shapes.
It seems implausible to me that all of these things can be encoded as chemical signalling; it seems to require a much more complex encoding of information, pattern matching, templates, and/or memory.
You might be interested in epigenetic inheritance. We do know that some epigenetic marks are passed down, but it's still very much unknown how much heritable information is encoded in epigenetics.
Transgenerational epigenetic inheritance: https://en.wikipedia.org/wiki/Transgenerational_epigenetic_i...
Methods of intergenerational transfer: DNA, RNA, bacteria, fungi, verbal latencies, explicit training
From https://x.com/westurner/status/1213675095513878528 :
> Does the fundamental limit of the amount of classical information encodable in the human genome (even with epigenetics & simultaneous encoding) imply a vast capacity for learning survival-beneficial patterns in very little time, with very few biasing priors?
> [Fundamental 'gbit' requirement 1: “No Simultaneous Encoding”:] if a gbit is used to perfectly encode one classical bit, it cannot simultaneously encode any further information. Two close variants of this are Zeilinger’s Principle (10) and Information Causality (11).
> Is there a proved presumption that genes only code in sequential combinations? Still overestimating the size of the powerset of all [totally-ordered] nonlocal combinations? Still trying to understand counterfactuals in re: constructor theory
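As a back-of-envelope check on that fundamental limit, a minimal sketch assuming the textbook figure of roughly 3.1 billion base pairs at 2 bits per base (4 nucleotides); the constants are approximations, not measured values:

```python
# Back-of-envelope classical information capacity of the human genome.
base_pairs = 3.1e9        # approximate human genome length
bits = base_pairs * 2     # 4 symbols (A, C, G, T) -> 2 bits per base pair
megabytes = bits / 8 / 1e6
print(f"~{megabytes:.0f} MB of raw classical capacity")
```

The striking point is how small this bound is (well under a gigabyte) relative to what organisms appear to "know" innately, which is what motivates the question above.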
Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory
(quantum) Counterfactuals reasoning: https://www.google.com/search?q=(quantum)+*Counterfactual*+r... :
> Counterfactual reasoning is the process of considering events that could have happened but didn't.
Counterfactual definiteness: https://en.wikipedia.org/wiki/Counterfactual_definiteness
Quantum discord; there are multiple types of quantum entropy; entanglement and non-entanglement entropy: https://en.wikipedia.org/wiki/Quantum_discord
N-ary entanglement,
Collective unconscious > See also: https://en.wikipedia.org/wiki/Collective_unconscious
FWIU memories are stored in the cortex and also in the hippocampus; "Brain found to store three copies of every memory" (2024) https://news.ycombinator.com/item?id=41352124
... How do dogs know what not to eat in the wild?
> ... How do dogs know what not to eat in the wild?
They don’t.
Then how have any survived?
Observe the human response to dandelions; are they weeds or are they edible?
Do they have lobed leaves? What [neurons,] do mammals have to heuristically generalize according to visual and gustatory-olfactory features, and counterfactually which don't they have?
Or it's entirely learned, and then the coding for the substrate is still relevant
This topic is related to the work of Michael Levin’s lab, which I only recently found out about and have been digging into. They’ve released a bunch of papers, and Michael has given plenty of in-depth interviews available on YouTube. They’re looking at low-level structures like cells and asking “what can be learned/achieved by viewing these structures as being intelligent agents?” The problem of memory is tied intricately with intelligence, and examples of it at these low levels are found throughout their work.
The results of their experiments are surprising and intriguing: bringing cancer cells back into proper functioning, “anthrobots” self-assembling from throat tissue cells, malformed tadpoles becoming normal frogs, cells induced to make an eye by recruiting their neighbors…
An excerpt from the link below: Our main model system is morphogenesis: the ability of multicellular bodies to self-assemble, repair, and improvise novel solutions to anatomical goals. We ask questions about the mechanisms required to achieve robust, multiscale, adaptive order in vivo, and about the algorithms sufficient to reproduce this capacity in other substrates. One of our unique specialties is the study of developmental bioelectricity: ways in which all cells connect in somatic electrical networks that store, process, and act on information to control large-scale body structure. Our lab creates and employs tools to read and edit the bioelectric code that guides the proto-cognitive computations of the body, much as neuroscientists are learning to read and write the mental content of the brain.
> developmental bioelectricity
Probably already aware of methods like:
- Tissue Nanotransfection : https://en.wikipedia.org/wiki/Tissue_nanotransfection
- "Direct neuronal reprogramming by temporal identity factors" (2023) https://www.pnas.org/doi/10.1073/pnas.2122168120#abstract
FrontierMath: A benchmark for evaluating advanced mathematical reasoning in AI
ScholarlyArticle: "FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI" (2024) https://arxiv.org/abs/2411.04872 .. https://epochai.org/frontiermath/the-benchmark :
> [Not even 2%]
> Abstract: We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics -- from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and for the upper end questions, multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.
Additional AI math benchmarks:
- "TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/TIGER-AI-Lab/TheoremQA
Rethinking Code Refinement: Learning to Judge Code Efficiency
Re: costed opcodes in BPF, eWASM (like EVM) https://news.ycombinator.com/item?id=40475509 :
> Costed opcodes incentivize energy efficiency.
Aren't these all fitness scoring methods for what GA calls mutation, crossover, and selection?
And do the functions under test stably converge on the same output even if not exactly functionally isomorphic? When are less costly approximate solutions sufficient?
Big O, Dynamic tracing and instrumentation, costed opcodes, OPS/kWHr/$
https://news.ycombinator.com/item?id=41333249 :
> codefuse-ai/Awesome-Code-LLM > Analysis of AI-Generated Code, Benchmarks: https://github.com/codefuse-ai/Awesome-Code-LLM#6-analysis-o...
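A toy sketch of what such a fitness function might look like in GA terms — correctness first, with measured runtime as a cost penalty; the function names, cases, and scoring weights are illustrative, not taken from the paper:

```python
import timeit

def fitness(fn, cases):
    """Toy GA-style fitness: reward correct outputs, penalize measured runtime."""
    correct = sum(fn(x) == want for x, want in cases)
    runtime = timeit.timeit(lambda: [fn(x) for x, _ in cases], number=50)
    return correct - runtime

cases = [(2, 4), (3, 9), (10, 100)]
exact = lambda x: x * x                      # correct on all cases
approx = lambda x: x * x if x < 10 else 99   # approximate: wrong on one case

# Selection pressure prefers the candidate that is correct on more cases.
assert fitness(exact, cases) > fitness(approx, cases)
```

This is the mutation/crossover/selection framing from the comment above: any scalar combining correctness and cost (Big O estimates, costed opcodes, OPS/kWHr/$) can serve as the selection score.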
Superconductors at Room Temperature? UIC's Hydrides Could Make It Possible
NewsArticle: "Superconductors at Room Temperature? UIC’s Groundbreaking Materials Could Make It Possible" (2024) https://scitechdaily.com/superconductors-at-room-temperature...
ScholarlyArticle: "Designing multicomponent hydrides with potential high T_c superconductivity" (2024) https://www.pnas.org/doi/10.1073/pnas.2413096121 :
> Abstract: [...] First-principles searches are impeded by the computational complexity of solving the Eliashberg equations for large, complex crystal structures. Here, we adopt a simplified approach using electronic indicators previously established to be correlated with superconductivity in hydrides. This is used to study complex hydride structures, which are predicted to exhibit promisingly high critical temperatures for superconductivity. In particular, we propose three classes of hydrides inspired by the Fm3m RH_3 structures that exhibit strong hydrogen network connectivity, as defined through the electron localization function. [...] These design principles and associated model structures provide flexibility to optimize both Tc and the structural stability of complex hydrides.
Confinement in the Transverse Field Ising Model on the Heavy Hex Lattice
"Confinement in the Transverse Field Ising model on the Heavy Hex lattice" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... :
> Abstract: Inspired by a recent quantum computing experiment [Y. Kim et al., Nature (London), 618, 500–5 (2023)], we study the emergence of confinement in the transverse field Ising model on a decorated hexagonal lattice. Using an infinite tensor network state optimized with belief propagation we show how a quench from a broken symmetry state leads to striking nonthermal behavior underpinned by persistent oscillations and saturation of the entanglement entropy. We explain this phenomenon by constructing a minimal model based on the confinement of elementary excitations. Our model is in excellent agreement with our numerical results. For quenches to larger values of the transverse field and/or from nonsymmetry broken states, our numerical results display the expected signatures of thermalization: a linear growth of entanglement entropy in time, propagation of correlations, and the saturation of observables to their thermal averages. These results provide a physical explanation for the unexpected classical simulability of the quantum dynamics.
"Evidence for the utility of quantum computing before fault tolerance" (2023) https://www.nature.com/articles/s41586-023-06096-3 :
> Abstract: [...] In the regime of strong entanglement, the quantum computer provides correct results for which leading classical approximations such as pure-state-based 1D (matrix product states, MPS) and 2D (isometric tensor network states, isoTNS) tensor network methods [2,3] break down. These experiments demonstrate a foundational tool for the realization of near-term quantum applications [4,5]
NewsArticle: "Computers Find Impossible Solution, Beating Quantum Tech at Own Game" (2024) https://www.sciencealert.com/computers-find-impossible-solut... :
> As successful as that test was, follow-up experiments have shown classical computers can do it too.
> According to the Flatiron Institute's Joseph Tindall and Dries Sels, this is possible because of a behavior called confinement, in which extremely stable states appear in the interconnected chaos of undecided particle properties, giving a classical computer something it can model.
Ask HN: Similar to the FoxDot live coding Pattern object?
FoxDot is an open source music live coding environment written in Python on SuperCollider.
FoxDot has a Pattern object for musical composition that has neat broadcasting rules for [musical] sequences. Are there similar or similarly-concise pattern objects in other languages or projects?
FoxDot pattern object docs: https://foxdot.org/docs/pattern-basics/
> FoxDot has a Pattern object for musical composition that has neat broadcasting rules for [musical] sequences. Are there similar or similarly-concise pattern objects in other languages or projects?
> FoxDot pattern object docs: https://foxdot.org/docs/pattern-basics/
FWIU the PitchGlitch fork of FoxDot is the most updated: https://gitlab.com/iShapeNoise/foxdot
Show HN: Draw.Audio – A musical sketchpad using the Web Audio API
Nice. This made me think of Conway's Game of Life so I made a little mashup of a tone matrix with GoL. https://matthewbilyeu.com/tone-of-life.html
From https://news.ycombinator.com/item?id=36927473 :
> Neat. Can it zoom out instead of wrapping around
Is it a 2d convolution?
> scipy.ndimage.convolve / scipy.fftpack.dct
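For reference, one Game of Life step really is a 2D convolution — a minimal sketch using `scipy.ndimage.convolve` with wrap-around edges (matching the toroidal grid of a tone matrix); the blinker test pattern is just a standard example:

```python
import numpy as np
from scipy import ndimage

def gol_step(grid):
    """One Game of Life step via 2D convolution with wrap-around edges."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    n = ndimage.convolve(grid, kernel, mode="wrap")  # count the 8 neighbors
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1                      # horizontal bar of 3 cells...
assert gol_step(blinker)[1:4, 2].sum() == 3  # ...oscillates to vertical
```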
This would be a cool instrument in BespokeSynth
Quantum Picturalism: Learning Quantum Theory in High School
"Quantum Picturalism: Learning Quantum Theory in High School - Lia Yeh" ZX-calculus https://youtube.com/watch?v=Je2I-Z74p7k&
"Quantum Picturalism: Learning Quantum Theory in High School" (2023) https://arxiv.org/abs/2312.03653 :
> Abstract: Quantum theory is often regarded as challenging to learn and teach, with advanced mathematical prerequisites ranging from complex numbers and probability theory to matrix multiplication, vector space algebra and symbolic manipulation within the Hilbert space formalism. It is traditionally considered an advanced undergraduate or graduate-level subject. In this work, we challenge the conventional view by proposing "Quantum Picturalism" as a new approach to teaching the fundamental concepts of quantum theory and computation. We establish the foundations and methodology for an ongoing educational experiment to investigate the question "From what age can students learn quantum theory if taught using a diagrammatic approach?". We anticipate that the primary benefit of leveraging such a diagrammatic approach, which is conceptually intuitive yet mathematically rigorous, will be eliminating some of the most daunting barriers to teaching and learning this subject while enabling young learners to reason proficiently about high-level problems. We posit that transitioning from symbolic presentations to pictorial ones will increase the appeal of STEM education, attracting more diverse audience.
Q12, QIS
ZX-calculus: https://en.wikipedia.org/wiki/ZX-calculus :
> Generators: The building blocks or generators of the ZX-calculus are graphical representations of specific states, unitary operators, linear isometries, and projections in the computational basis |0⟩, |1⟩ and the Hadamard-transformed basis
The subject matter is too advanced for all but a very tiny fraction of high school students (who should be tracked into customized education anyway). Adding pictures doesn't help.
Also they describe only one part of quantum theory. There is a ton of quantum chemistry and other things that are quantum, but don't deal with stuff like quantum teleportation, which frankly is fairly exotic for everybody but physics grad students and their advisors.
It could be an elective if it teaches its own prereqs, right?
Why not code instead?
OTOH a QIS learning sequence:
QuantumQ, an open source circuit puzzle game with no intro at all because amoebas; Cirq (SymPy) and QISkit tutorials with Python syntax; tequila abstractions and expectation values; and The Qubit Game to respect the QC platform (though that's getting better all the time)
QuantumQ: https://github.com/ray-pH/quantumQ
Quantum logic gate: https://en.wikipedia.org/wiki/Quantum_logic_gate
Exercise: Implement a QuantumQ circuit puzzle level with Cirq or QISkit in a Jupyter notebook
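A stdlib+NumPy sketch of the kind of single-qubit puzzle QuantumQ poses — reach |1⟩ from |0⟩ — without requiring Cirq or QISkit to be installed; the H·Z·H = X identity is a classic alternate solution to such a puzzle:

```python
import numpy as np

# Single-qubit gates as 2x2 unitary matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply(gates, state):
    """Apply gates in listed order to a state vector."""
    for g in gates:
        state = g @ state
    return state

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Puzzle: turn |0> into |1>. Direct solution: X. Alternate: H, then Z, then H.
assert np.allclose(apply([X], ket0), ket1)
assert np.allclose(apply([H, Z, H], ket0), ket1)
```

The same circuits translate line-for-line into Cirq or QISkit gate calls, which is the point of the exercise.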
DB48X: High Performance Scientific Calculator, Reinvented
Could this run on a TI or a NumWorks calculator?
RPN: Reverse Polish Notation: https://en.wikipedia.org/wiki/Reverse_Polish_notation
RPL: Reverse Polish LISP: https://en.wikipedia.org/wiki/RPL_(programming_language)
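RPN evaluation is just a stack discipline; a minimal sketch of the evaluator an RPN calculator implements (binary arithmetic operators only, for illustration):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def rpn(expr):
    """Evaluate a space-separated RPN expression with a stack."""
    stack = []
    for token in expr.split():
        if token in OPS:
            b, a = stack.pop(), stack.pop()  # top of stack is the right operand
            stack.append(OPS[token](a, b))
        else:
            stack.append(float(token))
    return stack[-1]

assert rpn("3 4 + 2 *") == 14.0   # (3 + 4) * 2
```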
In theory yes. In practice, the TI 83/84 are probably not beefy enough, and the NumWorks has a keyboard that makes the endeavour really complicated.
JupyterLite includes SymPy CAS, Pandas, NumPy, and SciPy in WASM (Pyodide) in a browser tab; but it's not yet easy to save notebooks to git or gdrive out of the JupyterLite build. awesome-jupyter > Hosted notebooks; Cocalc, BinderHub, G Colab, ml-tooling/best-of-jupyter
It's also possible to install JupyterLab etc on Android with termux: `pkg install proot gitea python; pip install jupyterlab pandas` iirc
But that doesn't limit the user to a non-QWERTY A-Z keyboard for the College Board.
jupyter notebooks can be (partially auto-) graded in containers with Otter-Grader or nbgrader; and there's nbgitpuller or jupyterlab-git for revision control or source control in (applied) mathematics
The BPF instruction set architecture is now RFC 9669
So… eBPF is now BPF and old BPF is now cBPF or classic bpf.
And there's wasm-bpf: https://github.com/eunomia-bpf/wasm-bpf#how-it-works
But should (browser) WASM processes cross the kernel boundary for BPF performance?
FWIW EVM/eWASM opcodes have a cost in gas/particles.
Do you think that BPF opcodes should be costed, too? Why or why not?
Are you... asking yourself? What did you decide?
Costing instructions leads to efficiency metrics, which makes it possible to incentivize efficiency.
BPF instructions could also each have an abstract relative cost with or without real value.
BPF in WASM (unfortunately without the kernel performance advantages or possible side channels) or the fwiu now-defunct eWASM might be an easier place to test the value of costed opcodes.
The [e]BPF verifier does not yet rewrite according to opcode costs of candidate programs.
"A look inside the BPF verifier [lwn]" https://news.ycombinator.com/item?id=41135478
Physicists spot quantum tornadoes twirling in a ‘supersolid’
The actual paper is hidden behind a Nature paywall. There is a preprint on arxiv here https://arxiv.org/abs/2403.18510
"Observation of vortices in a dipolar supersolid" (2024) https://arxiv.org/abs/2403.18510
- "Topological gauge theory of vortices in type-III superconductors" (2024) https://news.ycombinator.com/item?id=41803662
- "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691 ( https://news.ycombinator.com/item?id=41442489 )
- Aren't there electron vortices in rhombohedral trilayer graphene? https://news.ycombinator.com/item?id=40919385
- And isn't edge current chirality vortical, too? https://news.ycombinator.com/item?id=41765192
> isn't edge current chirality vortical, too?
- /? valley vortex edge modes chiral: https://www.google.com/search?q=Valley+vortex+edge+modes+chi...
- "Active particle manipulation with valley vortex phononic crystal" (2024) https://www.sciencedirect.com/science/article/abs/pii/S03759... :
> The valley vortex state presents an emerging domain in condensed matter physics. As a new information carrier with orbital angular momentum, it demonstrates remarkable ability for reliable, non-contact particle manipulation. In this paper, an acoustic system is constructed to exhibit the acoustic valley state, and traps particles using acoustic radiation force generated by the acoustic valley. Considering temperature effect on the acoustic system, the band structures for temperature fluctuations are discussed. An active controlled method for manipulating particle movement is proposed. Numerical simulations confirm that the particles can be captured by acoustic valley state, and can be repositioned through alterations in temperature, while maintaining a constant excitation frequency.
2D neural geometry underpins hierarchical organization of seq in working memory
ScholarlyArticle: "Two-dimensional neural geometry underpins hierarchical organization of sequence in human working memory" (2024) https://www.nature.com/articles/s41562-024-02047-8
What about representational drift; are the observed structures of the learned sequences stable over space and time?
From "Discovery of Memory "Glue" Explains Lifelong Recall: KIBRA Protein, PKMzeta" (2024) https://news.ycombinator.com/item?id=41921715 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift
Genetic repair via CRISPR can inadvertently introduce other defects
ScholarlyArticle: "Gene editing of NCF1 loci is associated with homologous recombination and chromosomal rearrangements" (2024) https://www.nature.com/articles/s42003-024-06959-z
Does it matter which CRISPR or CRISPR-like method is applied; or is there in general a dynamic response to gene editing in DNA/RNA?
The article covers this and I think the title is a bit too general. It is a byproduct of how CRISPR works as it targets a specific sequence. In this case the sequence is also present in areas that were non-targeted. Essentially, the sequence was not unique so the process impacted other areas in unintended ways.
> Here we evaluated diverse corrective CRISPR-based editing approaches that target the ΔGT mutation in NCF1, with the goal of evaluating post-editing genomic outcomes. These approaches include Cas9 ribonucleoprotein (RNP) delivery with single-stranded oligodeoxynucleotide (ssODN) repair templates [11,12], dead Cas9 (dCas9) shielding of NCF1 [13], multiplex single-strand nicks using Cas9 nickase (nCas9) [14,15], or staggered double-strand breaks (DSBs) using Cas12a [16].
- "A NICER approach to genome editing with less mutations than CRISPR" (2023) [cas13d] https://news.ycombinator.com/item?id=37698196 :
"Prediction of on-target and off-target activity of CRISPR–Cas13d guide RNAs using deep learning" (2023) https://www.nature.com/articles/s41587-023-01830-8
Defibrillation devices save lives using 1k times less electricity
The article is about internal defibrillators. External ones are still the same as (good grief) 35 years ago (well, maybe down from 300J to 200J). The only change I've noticed is moving from a gel for the pads to a gel pad (which feels like a frog; chuck one in your partner's bed and let them find it!), which reduced the possibility of burning and odd smells in your ambulance. Fortunately my sense of smell wasn't great and I often had a partner who smoked (and was allowed to in the olden days) in the ambulance to dull it. You kids don't know how it was having to actually manually read the trace instead of all this new-fangled automation that guides you through it.
"New defib placement increases chance of surviving heart attack by 264%" (2024) https://newatlas.com/medical/defibrillator-pads-anterior-pos... :
> Placing [AED,] defibrillator pads on the chest and back, rather than the usual method of putting two on the chest, increases the odds of surviving an out-of-hospital cardiac arrest by more than two-and-a-half times, according to a new study.
"Initial Defibrillator Pad Position and Outcomes for Shockable Out-of-Hospital Cardiac Arrest" (2024) https://jamanetwork.com/journals/jamanetworkopen/fullarticle...
I know article authors don't write their own headlines, but for all who read this: it's about out-of-hospital cardiac arrest, which can be caused by a heart attack, but is in no way the most likely presentation of a heart attack.
The AED should measure the rhythms before applying defibrillation.
An emergency AED operator doesn't need to make that distinction (doesn't need to differentially diagnose a HA as a CA), do they?
You just put the AED pads on the patient and push the button if they're having a heart attack.
(and stand clear such that you are not a conductor to the ground or between the pads)
Grounding isn't an issue, as AEDs are battery-powered once they are pulled off the wall.
But they do pump out a lot of juice. If you're touching the patient, it will HURT.
One can certainly shock oneself with a battery-powered car starter jump pack, particularly if one is a conductor to the ground or the circuit connects through the heart (which it sounds like anterior-posterior placement helps with).
Potential Energy charge in a battery wants to return to the ground just the same.
Oh, yeah, you can shock yourself very hard. But between two battery contacts, there is no ground. You can touch either one with no problem. It's when you touch both that you get the blast.
There's no return circuit even with your feet in salt water if you touch only one post of a battery.
I don't think that electron identity is relevant to whether there's e.g. arc discharge between + and - charges of sufficient strength?
Connecting just 1.5V AA battery contacts with steel wool causes fire. But doesn't just connecting the positive terminal of a battery to the ground result in current, regardless of the negative terminal of the battery?
(FWIU that's basically why we're advised to wear a grounding strap when operating on electronics with or without discharged capacitors)
Grounding straps prevent static charges from building up on you. A battery doesn’t really have a ground. The body of cars is hooked to the negative pole of the battery, so it’s called the “ground” of the car, but that’s for corrosion reasons.
Evaluating the world model implicit in a generative model
An LLM necessarily has to create some sort of internal "model" / representations pursuant to its "predict next word" training goal, given the depth and sophistication of context recognition needed to do well. This isn't an N-gram model restricted to just looking at surface word sequences.
However, the question should be what sort of internal "model" has it built? It seems fashionable to refer to this as a "world model", but IMO this isn't really appropriate, and certainly it's going to be quite different to the predictive representations that any animal that interacts with the world, and learns from those interactions, will have built.
The thing is that an LLM is an auto-regressive model - it is trying to predict continuations of training set samples solely based on word sequences, and is not privy to the world that is actually being described by those word sequences. It can't model the generative process of the humans who created those training set samples because that generative process has different inputs - sensory ones (in addition to auto-regressive ones).
The "world model" of a human, or any other animal, is built pursuant to predicting the environment, but not in a purely passive way (such as a multi-modal LLM predicting next frame in a video). The animal is primarily concerned with predicting the outcomes of it's interactions with the environment, driven by the evolutionary pressure to learn to act in way that maximizes survival and proliferation of its DNA. This is the nature of a real "world model" - it's modelling the world (as perceived thru sensory inputs) as a dynamical process reacting to the actions of the animal. This is very different to the passive "context patterns" learnt by an LLM that are merely predicting auto-regressive continuations (whether just words, or multi-modal video frames/etc).
> The "world model" of a human, or any other animal, is built pursuant to predicting the environment
What do you make of Immanuel Kant's claim that all thinking has as a basis the presumption of the "Categories"--fundamental concepts like quantity, quality and causality[1]. Do LLMs need to develop a deep understanding of these?
Embodied cognition implies that we understand our world in terms of embodied metaphor "categories".
LLMs don't reason, they emulate. RLHF could cause an LLM to discard text that doesn't look like reasoning according to the words in the response, but that's still not reasoning or inference.
"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285
Conceptual metaphor: https://en.wikipedia.org/wiki/Conceptual_metaphor
Embodied cognition: https://en.wikipedia.org/wiki/Embodied_cognition
Clean language: https://en.wikipedia.org/wiki/Clean_language
Given human embodied cognition as the basis for LLM training data, there are bound to be weird outputs about bodies from robot LLMs.
[wrong link] Parental leave at early stage startups
the link keeps defaulting to jqueryui.com. here is the blog post: https://www.jasgarcha.com/blog/on-parental-leave-at-early-st...
/? list of companies by paternity policy; parental leave https://www.google.com/search?q=list+of+companies+by+paterni...
/? paternity leave: https://hn.algolia.com/?q=paternity+leave
Parental leave in the United States: https://en.wikipedia.org/wiki/Parental_leave_in_the_United_S... :
> The United States is the only industrialized nation that does not offer paid parental leave. [109] Of countries in the United Nations only the United States, Suriname, Papua New Guinea, and a few Pacific island countries do not provide paid time off for new parents either through the employer's benefits or government paid maternity and parental benefits.[110] A study of 168 countries found that 163 guarantee paid leave to women, and 45 guarantee paid paternity leave. [111]
- "How my Universal Child Care and Early Learning plan works" https://elizabethwarren.com/plans/universal-child-care
Using Ghidra and Python to reverse engineer Ecco the Dolphin
When the original Ecco came out on the Megadrive (Genesis), I spent all my hard-earned money to buy it. That game is obscenely hard. I got frustrated, so I sat down for the afternoon with a pen and paper and somehow managed to decode the password system. I teleported to the final level and completed it the next day.
Then I was wracked with guilt about spending all my money on a game I completed in two days.
Philosophically, I would argue that you did not complete the game.
You skipped several levels and saw only some percentage of the intended content, gameplay, story, etc. Games in general, and Ecco the Dolphin is no exception, are very much about the journey and not just the destination. You missed out on themes & experiences like isolation, making friends with those outside of your in-group, conservation, time travel, communing with dinosaurs and, of course, space travel.
So, you really shouldn't have felt so guilty.
There are plenty of people who would argue that getting to end credits means beating the game.
You can however say that skipping levels also skips the story, so they did not finish the story.
Ecco the Dolphin (video game) > Plot: https://en.wikipedia.org/wiki/Ecco_the_Dolphin_(video_game)
Robustly learning Hamiltonian dynamics of a superconducting quantum processor
"Robustly learning the Hamiltonian dynamics of a superconducting quantum processor" (2024) https://www.nature.com/articles/s41467-024-52629-3 :
> Abstract: Precise means of characterizing analog quantum simulators are key to developing quantum simulators capable of beyond-classical computations. Here, we precisely estimate the free Hamiltonian parameters of a superconducting-qubit analog quantum simulator from measured time-series data on up to 14 qubits. To achieve this, we develop a scalable Hamiltonian learning algorithm that is robust against state-preparation and measurement (SPAM) errors and yields tomographic information about those SPAM errors. The key subroutines are a novel super-resolution technique for frequency extraction from matrix time-series, tensorESPRIT, and constrained manifold optimization. Our learning results verify the Hamiltonian dynamics on a Sycamore processor up to sub-MHz accuracy, and allow us to construct a spatial implementation error map for a grid of 27 qubits. Our results constitute an accurate implementation of a dynamical quantum simulation that is precisely characterized using a new diagnostic toolkit for understanding, calibrating, and improving analog quantum processors.
Functional ultrasound through the skull
This is fun, and the modeling is cool for sure, but it's well known that ultrasound can be used with surgical precision in the human brain.
Focused ultrasound is already used for non-invasive neuromodulation. Raag Airan's lab at Stanford does this for example using ultrasound uncaging.
https://www.frontiersin.org/journals/neuroscience/articles/1...
https://www.sciencedirect.com/science/article/pii/S089662731...
Also see the work by Urvi Vyas, eg
https://pubmed.ncbi.nlm.nih.gov/27587047/
I don't mean to discount the cool imaging-related reconstruction of a point spread function, but rather to say that ultrasound attenuation through the skull and soft tissue has already been well characterized, and it's not a surprise that it is viable to pass through.
OpenWater wiki > Neuromodulation: https://wiki.openwater.health/index.php/Neuromodulation
OpenwaterHealth/opw_neuromod_sw: https://github.com/OpenwaterHealth/opw_neuromod_sw :
> OpenWater's Transcranial Focused Ultrasound Platform. open-LIFU is an ultrasound platform designed to help researchers transmit focused ultrasound beams into subject’s brains, so that those researchers can learn more about how different types of ultrasound beams interact with the neurons in the brain. Unlike other focused ultrasound systems which are aimed only by their placement on the head, open-LIFU uses an array to precisely steer the ultrasound focus to the target location, while its wearable small size allows transmission through the forehead into a precise spot location in the brain even while the patient is moving.
FWIU NIRS is sufficient for most nontherapeutic diagnostics though. (Non-optogenetically, infrared light stimulates neuronal growth, and blue and green light inhibit neuronal growth.)
Instabilities in Black Holes Could Rewrite Spacetime Theories
NewsArticle: "Defying Einstein: Hidden Instabilities in Black Holes Could Rewrite Spacetime Theories" (2024) https://scitechdaily.com/defying-einstein-hidden-instabiliti...
ScholarlyArticle: "Mass Inflation without Cauchy Horizons" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
OTOH, there's gravity-to-photon conversion, which should likely affect mass conditions in black holes.
And also, again, Fedi's SQR Superfluid Quantum Relativity (.it), FWIU: also rejects a hard singularity boundary, describes curl and vorticity in fluids (with Gross-Pitaevskii), and rejects antimatter.
Intel Releases x86-SIMD-sort 6.0 for 10x faster AVX2/AVX-512 Sorting
intel/x86-simd-sort v6.0: https://github.com/intel/x86-simd-sort/releases/tag/v6.0 :
> Release 6.0 introduces several key enhancements to our software. The update brings new APIs for partial sorting of key-value pairs, along with comprehensive AVX2 support across all routines. Additionally, PyTorch now leverages AVX-512 and AVX2 key-value sort routines to boost the performance of `torch.sort` and `torch.argsort` functions by up to 10x (see pytorch/pytorch#127936).
"Adds support for accelerated sorting with x86-simd-sort" https://github.com/pytorch/pytorch/pull/127936
Traceroute Isn't Real
traceroute is a useful tool for (amongst other things) determining where a system is physically. It doesn't always give you enough information.
Traceroute to news.ycombinator.com:
3 96.108.68.141 (po-200-xar01.maynard.ma.boston.comcast.net) 12.105ms 11.724ms 11.931ms
4 96.108.68.141 (po-200-xar01.maynard.ma.boston.comcast.net) 13.011ms 10.727ms 19.861ms
5 162.151.52.34 (be-501-ar01.needham.ma.boston.comcast.net) 13.988ms 14.721ms 12.921ms
6 162.151.52.34 (be-501-ar01.needham.ma.boston.comcast.net) 14.999ms 16.688ms 12.997ms
7 4.69.146.65 (ae0.11.bar1.SanDiego1.net.lumen.tech) 76.044ms 79.624ms 78.017ms
8 4.69.146.65 (ae0.11.bar1.SanDiego1.net.lumen.tech) 83.962ms 108.675ms 78.987ms
9 * * *
I can conclude that the server is probably on the west coast -- maybe San Diego.

I recall using this during a sales pitch by some IP geolocation company that was very proud of its technology. The example they used, they claimed, was in Morristown, NJ. A quick traceroute (from Massachusetts) revealed that the IP was somewhere in the UK, as the last hop was close to Heathrow Airport. We did not purchase their solution!
This fails in a lot of different ways, not the least of which is anycast. The most common way is probably dns-based geographic steering.
Can you tell me where 8.8.8.8 is?
8.8.8.8 is in many places. It's hard to get a list of exactly where[1], but the more places you can traceroute from, the more places you can add to your list. For 8.8.8.8 in particular, if you've got probes in major PoPs, you can probably get response times of 1 ms or less and be pretty sure that Google has a presence there. Of course, if you get a longer response time, they may have a presence there but your network may not have their local advertisement and your query routes elsewhere; or maybe the response from Google is being routed through another location. Traceroute might help debug a longer route to Google, but it'll be hard to debug the return.
For dns steering to unicast ips, you can get a reasonable idea of where the ips you can see are, although you'll need to make dns requests from different locations to see more of the available dns answers.
[1] Unless you're an insider, or maybe Google publishes a list somewhere. Their peeringdb listing of locations is probably a good start, though.
$ dig @8.8.8.8 +nsid news.ycombinator.com.
; NSID: 67 70 64 6e 73 2d 61 6d 73 ("gpdns-ams")
news.ycombinator.com. 1 IN A 209.216.230.207
Repeating the same query returned: ; NSID: 67 70 64 6e 73 2d 67 72 71 ("gpdns-grq")
A few other resolvers implement this as well, e.g. 1.1.1.1 and 9.9.9.9: 1.1.1.1: ; NSID: 35 32 31 6d 32 37 34 ("521m274")
9.9.9.9: ; NSID: 72 65 73 31 32 31 2e 61 6d 73 2e 72 72 64 6e 73 2e 70 63 68 2e 6e 65 74 ("res121.ams.rrdns.pch.net")
with, as you can see, various degrees of human-readable information on the actual replying resolver. (More DNS resolver introspection tricks can be found with DNSDiag: https://dnsdiag.org/ )
DNSSEC:
delv @8.8.8.8 +dnssec news.ycombinator.com
delv @8.8.8.8 +dnssec AAAA news.ycombinator.com
If you run traceroute multiple times - for example throughout the day - and overlay the results, you will have a better idea of the routes that [ICMP or TCP or UDP] packets are taking between src and dest.
It is trivial to add or hide hops to traceroute by just changing the ttl on certain or all packets.
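The TTL mechanics that traceroute (and TTL-rewriting middleboxes) rely on can be sketched with the stdlib. This is only an illustrative probe-socket setup, no packets are actually sent, and it is not a full traceroute:

```python
import socket

def make_probe_socket(ttl):
    """UDP socket whose outgoing packets would carry the given IP TTL.

    Classic traceroute sends probes with TTL 1, 2, 3, ... and records
    which router returns ICMP Time Exceeded at each TTL; a middlebox
    that rewrites TTLs can therefore add or hide hops."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return s

probes = [make_probe_socket(ttl) for ttl in range(1, 4)]
ttls = [s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL) for s in probes]
print(ttls)  # [1, 2, 3]
for s in probes:
    s.close()
```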
mtr does traceroute.
There's a 3d traceroute in scapy.
Scapy docs > TCP traceroute: https://scapy.readthedocs.io/en/latest/usage.html#tcp-tracer... :
> If you have VPython installed, you also can have a 3D representation of the traceroute.
Useful built-in macOS command-line utilities
upgrade_mac.sh: https://github.com/westurner/dotfiles/blob/develop/scripts/u... :
upgrade_macos() {
softwareupdate --list
softwareupdate --download
softwareupdate --install --all --restart
}
The first wooden satellite launched into space
/? What are the temperature extremes on the sun side of a satellite https://www.google.com/search?q=What+are+the+temperature+ext... ... 250° F
"Low Temperature Ignition of Wood" https://www.warrenforensics.com/wp-content/uploads/2016/06/P... :
> ignition of wood will occur under exposure temperatures of as little as 256ºF for periods of 12 to 16 hours per day in as little as 623 days or approximately 21 months.
It's also possible to encounter focused solar radiation due to various lensing effects.
Starlite: https://en.wikipedia.org/wiki/Starlite :
> Starlite was also claimed to have been able to withstand a laser beam that could produce a temperature of 10,000 °C
And, hemp fiber has far higher tensile strength than wood; which probably matters given the hazard of space debris.
Wood does not ignite in space because there's no air in space.
There are non-oxidative types of decomposition, if you can reach high enough temperatures:
Discussion (44 points, 42 comments) https://news.ycombinator.com/item?id=42051687
/? wood satellite https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Ask HN: What happened to the No Paid Prioritization net neutrality rule?
Net neutrality law: https://en.wikipedia.org/wiki/Net_neutrality_law
Net neutrality in the United States > Regulatory History: https://en.wikipedia.org/wiki/Net_neutrality_in_the_United_S...
How does "No Paid Prioritization" compare to "Equal Time", and which groups have opposed and supported which?
What is "equal time"? I've never heard of an "equal time" net neutrality rule.
The only FCC "Equal Time" rule I'm familiar with is with respect to political party representation during election coverage.
FWIU the Equal-time rule is distinct from the cancelled "Fairness doctrine", and the Equal Time rule applies only to broadcast radio and tv.
The "No Paid Prioritization" rule applies to all Title II Common Carrier Internet Service Providers but not to Information Service Providers.
The 2015 FCC net neutrality rules for ISPs were cancelled in 2017 under a former Verizon lobbyist, then reinstituted by the FCC in April 2024: "No blocking", "No throttling", "No Paid Prioritization".
ISPs are again considered Title II Common Carriers.
But now a court has stayed the rules due to corporate lobbying.
So, the "No Paid Prioritization" rule does not apply to the Internet at the moment, because of corporate lobbying.
Equal-time rule: https://en.wikipedia.org/wiki/Equal-time_rule
(Comcast in Philadelphia acquired NBC from GE in 2010; and a bunch of shows died, but not SNL. Comcast is an ISP and also owns NBC and thus SNL. Tom Wheeler - who got the 2015 Open Internet Rules passed - was an attorney for Comcast prior to the FCC post.
Pai was a Verizon lobbyist prior to the FCC post. Pai worked to cancel the "No blocking", "No throttling", and "No Paid Prioritization" rules that protected the Internet.)
There is no equal time if there is paid prioritization; to do paid prioritization the ISP prioritizes packets according to payola (which is already illegal in the music industry, for example).
From https://en.wikipedia.org/wiki/Payola :
> Payola, in the music industry, is the name given to the illegal practice of paying a commercial radio station to play a song without the station disclosing the payment
Are there disclosure rules for Paid Prioritization by ISPs? Are my school videos slower because adult sites just pay the man?
Equal time has nothing to do with net neutrality…
Do you know what a priority queue is compared to FIFO or LIFO? (Edit: Network scheduler: https://en.wikipedia.org/wiki/Network_scheduler )
How would you implement each with code?
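For illustration, a minimal stdlib sketch of the three queue disciplines; the flow names and "paid priority" values are invented:

```python
import heapq
from collections import deque

# (flow name, paid priority): lower number = paid more. Invented data.
packets = [("video", 2), ("voip", 0), ("email", 3), ("gaming", 1)]

fifo = deque()                # FIFO: neutral, first come first served
for p in packets:
    fifo.append(p)
fifo_order = [fifo.popleft()[0] for _ in range(len(packets))]

lifo = []                     # LIFO: a stack, newest packet first
for p in packets:
    lifo.append(p)
lifo_order = [lifo.pop()[0] for _ in range(len(packets))]

pq = []                       # priority queue: whoever paid most first
for name, prio in packets:
    heapq.heappush(pq, (prio, name))
pq_order = [heapq.heappop(pq)[1] for _ in range(len(packets))]

print(fifo_order)  # ['video', 'voip', 'email', 'gaming']
print(lifo_order)  # ['gaming', 'email', 'voip', 'video']
print(pq_order)    # ['voip', 'gaming', 'video', 'email']
```

Only the FIFO discipline is neutral with respect to arrival order; the priority queue reorders traffic by payment, which is the point of the comparison.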
I think if you support the values of Equal Time, you should also support No Paid Prioritization.
If there is paid prioritization, there is not equal time.
Equal time is about campaigns having equal amount of airtime on broadcast television/radio. It says nothing about the internet or network packet prioritization.
Revealing the superconducting limit of twisted bilayer graphene
"Low-Energy Optical Sum Rule in Moiré Graphene" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... https://arxiv.org/abs/2312.03819
But what about rhombohedral trilayer graphene?
"Revealing the complex phases of rhombohedral trilayer graphene" (2024) https://www.nature.com/articles/s41567-024-02561-6 .. https://news.ycombinator.com/item?id=40919269
SharpNEAT – evolving NN topologies and weights with a genetic algorithm
/? parameter-free network: https://scholar.google.com/scholar?q=Parameter-free%20networ...
"Ask HN: Parameter-free neural network models: Limits, Challenges, Opportunities?" (2024) https://news.ycombinator.com/item?id=41794249 :
> Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
Do these GAs for (hyper)parameter estimation converge given different random seeds?
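One way to check seed-sensitivity is to rerun the optimizer under several seeds and compare the results. A toy (1+1) evolution strategy on a convex function, standing in for a real GA/NEAT run, might look like:

```python
import random

def one_plus_one_es(seed, steps=2000):
    """Toy (1+1) evolution strategy minimizing f(x) = x**2.

    Accept-only hill climbing with a fixed mutation width; a stand-in
    for asking whether a GA's answer is stable across random seeds."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for _ in range(steps):
        child = x + rng.gauss(0, 0.5)
        if child * child <= x * x:  # keep the child only if it improves
            x = child
    return x

# Same optimizer, five seeds: on this convex problem every run converges
# toward 0, so the estimates agree; a multimodal problem may not.
results = [one_plus_one_es(seed) for seed in range(5)]
print([round(r, 2) for r in results])
```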
A React Renderer for Gnome JavaScript
From https://github.com/react-gjs/renderer#elements-of-gtk3 :
> List of all GTK3 Widgets provided as JSX Components by this renderer:
I couldn't find a free, no-login, no-AI checklist app–so I built one
TodoMVC/examples has 40+ open source todo apps written with various popular and obscure frameworks: https://github.com/tastejs/todomvc/tree/master/examples
Eighty Years of the Finite Element Method (2022)
From "Chaos researchers can now predict perilous points of no return" (2022) https://news.ycombinator.com/item?id=32862414 :
> FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method
>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))
>> FEA: Finite Element Analysis (applied FEM)
> awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea
And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171
Direct Sockets API in Chrome 131
From "Chrome 130: Direct Sockets API" (2024-09) https://news.ycombinator.com/item?id=41418718 :
> I can understand FF's position on Direct Sockets [...] Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.
> Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.
But HTTP Signed Exchanges is cancelled; so does that mean arbitrary code with sockets if just one ad network is compromised?
...
> Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431
> Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy
> docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...
Using Large Language Models to Catch Vulnerabilities
awesome-code-llm > Vulnerability Detection, Program Proof,: https://github.com/codefuse-ai/Awesome-Code-LLM#vulnerabilit...
Half of Young Voters Say They've Lied about Which Candidates They Support
In the presented statistics, it's not so much about party as it is about age. Which tracks with my experience that young people are much more conflict averse.
Also, interesting that men are twice as likely to lie as women. Which goes against accepted logic that women would be more likely to conform socially.
> Which goes against accepted logic that women would be more likely to conform socially.
Citation needed. Where is that coming from?
Heels, yucky lipstick
Conformity > Specific predictors: https://en.wikipedia.org/wiki/Conformity#Specific_predictors
Using SQLite as storage for web server static content
I did some experiments around this idea a few years ago, partly inspired by the "35% Faster Than The Filesystem" article: https://www.sqlite.org/fasterthanfs.html
Here are the notes I made at the time: https://simonwillison.net/2020/Jul/30/fun-binary-data-and-sq...
I built https://datasette.io/plugins/datasette-media as a plugin for serving static files from SQLite via Datasette and it works fine, but honestly I've not used it much since I built it.
A related concept is using SQLite to serve map tiles - this plugin https://datasette.io/plugins/datasette-tiles does that, using the MBTiles format which it turns out is a SQLite database full of PNGs.
If you want to experiment with SQLite to serve files you may find my "sqlite-utils insert-files" CLI tool useful for bootstrapping the database: https://sqlite-utils.datasette.io/en/stable/cli.html#inserti...
Is there a way to sendfile from a SQLite database like xsendfile?
requests-cache caches requests in SQLite by (date,URI) IIRC. https://github.com/requests-cache/requests-cache/blob/main/r...
/? pyfilesystem SQLite: https://www.google.com/search?q=pyfilesystem+sqlite
/? sendfile mmap SQLite: https://www.google.com/search?q=sendfile+mmap+sqlite
https://github.com/adamobeng/wddbfs :
> webdavfs provider which can read the contents of sqlite databases
There's probably a good way to implement a filesystem with Unix file permissions and xattrs extended file attribute permissions atop SQLite?
Would SQLite be faster or more convenient than e.g. ngx_http_memcached_module.c? Does SQLite have per-cell ACLs either?
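As a sketch of the permissions-and-xattrs idea (a hypothetical schema, unrelated to the sqlar or wddbfs formats), mode bits and xattrs could live in columns next to the content blob:

```python
import json
import sqlite3

# Hypothetical schema: one row per file, with Unix mode bits and
# xattrs stored as JSON next to the content blob.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE files (
        path  TEXT PRIMARY KEY,
        mode  INTEGER NOT NULL,   -- Unix permission bits, e.g. 0o644
        xattr TEXT NOT NULL,      -- JSON-encoded extended attributes
        data  BLOB NOT NULL
    )""")
con.execute(
    "INSERT INTO files VALUES (?, ?, ?, ?)",
    ("/index.html", 0o644,
     json.dumps({"user.mime": "text/html"}), b"<h1>hello</h1>"),
)
mode, xattr, data = con.execute(
    "SELECT mode, xattr, data FROM files WHERE path = ?",
    ("/index.html",),
).fetchone()
print(oct(mode), json.loads(xattr)["user.mime"], len(data))
```

A real filesystem atop this would still need to enforce those mode bits itself; SQLite stores them but does not interpret them.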
Not all you asked, but a SQLite archive can be mounted using FUSE: https://sqlite.org/sqlar/doc/trunk/README.md
FUSE does local filesystems, and there are also HTTP remote VFS for sqlite.
sqlite-wasm-http is based on phiresky/sql.js-httpvfs, "a read-only HTTP-Range-request based virtual file system for SQLite. It allows hosting an SQLite database on a static file hoster and querying that database from the browser without fully downloading it." "but uses the new official SQLite WASM distribution." [1][2]
[2] sql.js-httpvfs: https://github.com/phiresky/sql.js-httpvfs
[1] sqlite-wasm-http: https://www.npmjs.com/package/sqlite-wasm-http
[3] sqlite3vfshttp: https://github.com/psanford/sqlite3vfshttp
HTTP Range requests: https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requ...
Hear the sounds of Earth's magnetic field from 41,000 years ago
If we look into a reflection in space 0.5 light years away from Earth, we should see 1 year ago (given zero motion relative to earth, and 0.5+0.5=1 year.)
Where there is relative motion, with photons at least, isn't there redshift and the Doppler effect? (edit: and a different parametric or chaotic convolution over large distances without e.g. helical polarization to shield the signal)
Magnetic fields reportedly have a maximum propagation speed that is also the speed of light c in a vacuum.
So, to recall Earth's magnetic field from 41,000 years ago with such a method would presumably require a reflector (41,000/2 = 20,500) light years away.
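The round-trip arithmetic in the two comments above reduces to halving the lookback time:

```python
def reflector_distance_ly(lookback_years):
    """Distance in light years a reflector must be for light leaving
    Earth to return after lookback_years (out plus back, both at c)."""
    return lookback_years / 2

print(reflector_distance_ly(1))       # 0.5
print(reflector_distance_ly(41_000))  # 20500.0
```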
Python PGP proposal poses packaging puzzles
This leaves out the important context that key verification for these packages isn't functional.
In the last 3 years, about 50k signatures had been uploaded to PyPI by 1069 unique keys. Of those 1069 unique keys, about 30% of them were not discoverable on major public keyservers, making it difficult or impossible to meaningfully verify those signatures. Of the remaining 71%, nearly half of them were unable to be meaningfully verified at the time of the audit (2023-05-19) [2].
More recently, on this thread:
This is related to the distribution of CPython itself, the key verification for those artifacts does work and has worked forever. The packaging referred to by the article is about packaging Python itself by upstream distributions.
Python packages developed by third-party developers and uploaded to PyPI are indeed not verifiable due to the key issues you mentioned, which is a minor note in the article.
W3C DIDs are verifiable e.g with blockchain-certificates/cert-verifier-js and blockchain-certificates/cert-verifier (Python).
If PyPI is not a keyserver, if it only hosts the attestations and checks checksums, can it fully solve for [Python] software supply chain security?
A table comparing the various known solutions might be good; including md5, sha3, GPG .ASC signatures, TUF, Uptane, Sigstore (Cosign + Rekor), PyPI w/w/o attestations, VC Verifiable Credentials, and Blockcerts (Verifiable Credentials (DIDs))
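For the checksum rows of such a table, the baseline mechanism (and its limitation) is easy to sketch with hashlib; the artifact bytes here are made up:

```python
import hashlib

def verify_artifact(data, expected_sha256):
    """Checksum check, the weakest rung of the supply-chain ladder: it
    detects corruption, but not a compromised index, since whoever can
    replace the file can usually replace the published hash beside it."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

blob = b"example-wheel-bytes"           # made-up artifact contents
digest = hashlib.sha256(blob).hexdigest()
print(verify_artifact(blob, digest))        # True
print(verify_artifact(blob + b"!", digest)) # False
```

Signatures (GPG, Sigstore, VCs) differ from this in binding the digest to a key the index operator does not control; that is the column a comparison table would need to capture.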
Where Did the Term '86' Come From?
86 (term) since the 1930s: https://en.wikipedia.org/wiki/86_(term)
Intel 8086 (1978) ... X86 (and AMD64/X86-64)
Motorola 680x0 / m68k (6800: 1974, 68000: 1979) https://en.wikipedia.org/wiki/Motorola_6800 https://en.wikipedia.org/wiki/Motorola_68000 .. "Motorola's pioneering 8-bit 6800: Origins and architecture" https://news.ycombinator.com/item?id=38616591
X86: https://en.wikipedia.org/wiki/X86
FWIW, "Smokey and the Bandit" was released in 1977; it is about bootlegging in the US shortly after the end of the Bretton Woods gold-reserve system (1971) and the CSA (1971). (Prohibition failed, fueled violent organized crime, and was repealed by Constitutional amendment in part due to Al Smith.)
How the human brain contends with the strangeness of zero
There was nothing strange about zero for our distant ancestors.
There are only two things that have been understood much later about zero, that it is a quantity that in many circumstances should be treated in the same way as any other numbers and that it requires a special symbol in order to implement a positional system for writing numbers.
The concept of zero itself had already been known many thousands of years before the invention of the positional system for writing numbers.
All the recorded ancient languages were able to answer the question "How many sheep do you see there?" not only with words meaning "one", "two", "three", "four", "few", "many" and so on, but also with a word meaning "none". Similarly for the answers to a question like "How much barley do you have?".
Nevertheless, the concept of the quantity "zero" must have been understood later than the concepts of "one", "two", "many" and of negation, because in most languages the words similar in meaning to "none", "nobody", "nothing", "null" are derived from negation words together with words meaning "one" or denoting small or indefinite quantities.
Because the first few numbers, especially 0, 1 and 2, have many distinctive properties in comparison with bigger numbers, not only was 0 initially perceived as being in a class somewhat separate from other numbers, but usually also 1 and 2, and sometimes 3 or even 4.
In many old languages the grammatical behavior of the first few numbers can be different between themselves and also quite different from that of the bigger numbers, which behave more or less in the same way, consistent with the expectation that the big numbers have been added later to the language and at a time when they were perceived as a uniform category.
There are other factual problems with this article. Most jarring is that it repeats this old incorrect information:
> Inside the Chaturbhuj Temple in India (left), a wall inscription features the oldest known instance of the digit zero, dated to 876 CE (right). It is part of the number 270.
This is the real oldest known zero:
> Because we know that the Çaka dynasty began in AD 78, we can date the artifact exactly: to the year AD 683. This makes the “0” in “605” the oldest zero ever found of our base-10, “Hindu-Arabic” number system.
> But the Cambodian stone inscription bears the first known zero within the system that evolved into the numbers we use today.
Maybe the Khmers came up with it or maybe they got the idea from India. Either way, we should set the facts straight.
0 (number) > Classical antiquity: https://en.wikipedia.org/wiki/0#Classical_antiquity :
> By AD 150, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for zero ( — ° ) [25][26] in his work on mathematical astronomy called the Syntaxis Mathematica, also known as the Almagest. [27] This Hellenistic zero was perhaps the earliest documented use of a numeral representing zero in the Old World. [28]
Astronomers discover complex carbon molecules in interstellar space
> This discovery also links to another important finding of the last decade – the first chiral molecule in the interstellar medium, propylene oxide. We need chiral molecules to make the evolution of simple lifeforms work on the surface of the early Earth.
It would be really amazing if we were able to know whether both chiralities are equally represented in space. Apart from the origin of life itself, it is astonishingly interesting how life evolved to be monochiral.
Does propylene oxide demonstrate a "propeller effect" like some other handed chiral molecules?
From https://news.ycombinator.com/item?id=41873531 :
> "Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Electro-agriculture: Revolutionizing farming for a sustainable future
For those wondering what electro-ag is: it converts CO2 into acetate, a carbon-rich compound that can fuel crop growth without sunlight.
"Electro-ag"
"Electronic soil boosts crop growth" (2023) https://news.ycombinator.com/item?id=38767561#38768499 :
> Electroculture
> "Electrical currents associated with arbuscular mycorrhizal interactions" (1995) https://scholar.google.com/scholar?cites=3517382204909176031...
Electrotropism: https://en.wikipedia.org/wiki/Electrotropism
What the op article suggests is using GMO plants that no longer use photosynthesis but instead drive their metabolism based on products derived from CO2 electrolysis and simple chemical man made compounds.
Plant-made plants; "plant" as in an industrial chemical plant.
Is that more efficient than [solar-powered] industrial production processes that synthesize directly from CO2? https://news.ycombinator.com/item?id=40914350#40959068
What about topsoil depletion and compost production?
It's 4x more efficient at solar to food production than regular plants with the potential to get a 10x improvement.
What are the downsides?
I read that it was [Vitamin E] acetate in carts that was causing EVALI lung conditions?
What nutrients does it require synthetic or natural production of, and how sustainable are those processes?
Have the given organisms co-evolved with earth ecology for millions or billions of years?
Acetate > Biology: https://en.wikipedia.org/wiki/Acetate
PV modules don't grow themselves. Also, the CO2 has to be captured somehow. Plants double as CO2 collectors.
An upside is vastly lower water requirements, since no transpiration.
Hmm, this article is about using electricity and CO2/H2O to make chemical feedstocks that would grow plants. The plants would "eat" the chemical feedstocks, instead of "eating" light.
It's not about using electric fields to direct plant growth; that's a different thing
Pygfx
pygfx/pygfx: https://github.com/pygfx/pygfx :
> Pygfx (pronounced “py-graphics”) is built on wgpu, enabling superior performance and reliability compared to OpenGL-based solutions.
pygfx/wgpu-py: https://github.com/pygfx/wgpu-py/ :
> A Python implementation of WebGPU
gfx-rs/wgpu: https://github.com/gfx-rs/wgpu :
> wgpu is a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
> The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox and Deno
I was/am a bit confused: I think this is unrelated to the wgpu cross-API toolkit in rust, it just abbreviated WebGpu the same way?
Same, I had assumed they weren't independent.
/?PyPI wgpu: https://pypi.org/search/?q=wgpu
Looks like xgpu is where it's actually at.
xgpu: https://github.com/pyrym/xgpu :
> xgpu is an aggressively typed, red-squiggle-free Python binding of wgpu-native, autogenerated from the upstream C headers
wgpu-py has a conda-forge package: https://anaconda.org/conda-forge/wgpu-py
Plastic chemical phthalate causes DNA breakage, chromosome defects, study finds
Apart from food packaging, one great way to easily ingest plastic is to wear synthetic clothing. Just a basic rubbing of a synthetic sleeve on your nose releases thousands of polyester particles into the air, readily breathable.
Not just clothing, but also bedding is a huge issue. With pillows, mattresses and towels mostly made of synthetic fibers.
My usual instinct is: try rubbing the synthetic material; if it releases thousands of particles into the air, stay away from it.
The clothing industry has somehow gotten by unscathed through all the environmental awareness that has spread in the past 20 years. I am pretty sure clothes are the #1 cause of the microplastics that have inundated the ocean and our water supply.
We have been heavily pushed to drive less, recycle more, and use less water, but I have not seen messaging about not buying new clothes you don't need.
France passed a law back in 2020 to require new washing machines to have a microplastics filter by 2025:
https://www.europarl.europa.eu/doceo/document/E-9-2020-00137...
It has also begun to subsidize the clothing repair industry:
https://www.cnn.com/2023/07/13/business/france-shoe-clothing...
Maybe we should subsidize plastic-free fibers instead. Cotton, hemp, wool...
Linen; it is made from the flax plant.
Natural fibers: https://en.wikipedia.org/wiki/Natural_fiber
Green textiles: https://en.wikipedia.org/wiki/Green_textile
There are newer more sustainable production processes for various natural fibers.
TIL that there are special laundry detergents for synthetic fabrics like most activewear like jerseys; and that fabric softener attracts mosquitos and adheres to synthetic fibers causing stank.
Universal optimality of Dijkstra via beyond-worst-case heaps
"Universal Optimality of Dijkstra via Beyond-Worst-Case Heaps" (2024) https://arxiv.org/abs/2311.11793 :
> Abstract: This paper proves that Dijkstra's shortest-path algorithm is universally optimal in both its running time and number of comparisons when combined with a sufficiently efficient heap data structure.
Dijkstra's algorithm: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
NetworkX docs > Reference > Algorithms > Shortest Paths: https://networkx.org/documentation/stable/reference/algorith...
networkX.algorithms.shortest_path.dijkstra_path: https://networkx.org/documentation/stable/reference/algorith... https://github.com/networkx/networkx/blob/main/networkx/algo...
/? Dijkstra manim: https://www.google.com/search?q=dijkstra%20manim
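The algorithm being analyzed is the textbook one; a minimal heapq-based sketch (lazy deletion, not the paper's beyond-worst-case heap):

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest paths with a binary heap (lazy deletion).

    graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    The paper's claim is about pairing this classic algorithm with a
    sufficiently efficient heap; this sketch just uses stdlib heapq."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left behind by a later improvement
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```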
A deep dive into Linux's new mseal syscall
- "Memory Sealing "Mseal" System Call Merged for Linux 6.10" (2024) https://news.ycombinator.com/item?id=40474510#40474551 :
> How should CPython support the mseal() syscall?
LibLISA – Instruction Discovery and Analysis on x86-64
I've long wanted to have a way to see what actually happens inside a CPU when a set of instructions are executed. I'm pretty excited after skimming this paper as it looks like they developed a technique to automatically determine how the x86-64 instructions actually work by observing real world CPU behavior.
From https://news.ycombinator.com/item?id=33563857 :
> Memory Debugger
Valgrind > Memcheck, None: https://en.wikipedia.org/wiki/Valgrind ; TIL Memcheck's `none` provides a traceback where the shell would normally just print "Segmentation fault"
DynamoRio > Dr Memory: https://en.wikipedia.org/wiki/DynamoRIO#Dr._Memory
Intel Pin: https://en.wikipedia.org/wiki/Pin_(computer_program)
https://news.ycombinator.com/item?id=22095435, : SoftICE, EPT, Hypervisor, HyperDbg, PulseDbg, BugChecker, pyvmidbg (libVMI + GDB), libVMI Python, volatilityfoundation/volatility, Google/rekall -> yara, winpmem, Microsoft/linpmem, AVML,
rr, Ghidra Trace Format: https://github.com/NationalSecurityAgency/ghidra/discussions... https://github.com/NationalSecurityAgency/ghidra/discussions... : applepie, orange_slice, cannoli
GDB can help with register introspection: https://web.stanford.edu/class/archive/cs/cs107/cs107.1202/l... :
> Auto-display and Printing Registers: The `info reg` command [and `info all-registers` (`i r a`)]
emu86 implements X86 instructions in Python, optionally in Jupyter notebooks; still w/o X86S, SIMD, AVX-512, x86-64-v4
> Valgrind `none` provides a traceback where the shell would normally just print "Segmentation fault"
What would it take to get `bash` to print a --traceback after "Segmentation fault\n", and then possibly also --gdb like pytest --pdb?
- [ ] ENH: bash: add valgrind `none`-style ~ --traceback after "Segmentation fault" given an env var or by default?
A DSL for peephole transformation rules of integer operations in the PyPy JIT
math.fma fused multiply add is in Python 3.13. Are there already rules to transform expressions to math.fma?
And does Z3 verification indicate differences in output due to minimizing float-rounding error?
https://docs.python.org/3/library/math.html#math.fma :
> math.fma(x, y, z)
> Fused multiply-add operation. Return (x * y) + z, computed as though with infinite precision and range followed by a single round to the float format. This operation often provides better accuracy than the direct expression (x * y) + z.
> This function follows the specification of the fusedMultiplyAdd operation described in the IEEE 754 standard.
I only support integer operations in the DSL so far.
(but yes, turning the python expression x*y+z into an fma call would not be a legal optimization for the jit anyway. And Z3 would rightfully complain. The results must be bitwise identical before and after optimization)
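A stdlib sketch of that bitwise difference: fma keeps a low-order bit that the two-step expression rounds away (assumes Python >= 3.13 for math.fma):

```python
import math

x = 1.0 + 2.0**-30
# Exact x*x = 1 + 2**-29 + 2**-60, but the 2**-60 term is rounded
# away when x*x is computed in double precision:
naive = x * x - x * x               # 0.0 -- the rounding error is invisible
if hasattr(math, "fma"):            # math.fma requires Python >= 3.13
    err = math.fma(x, x, -(x * x))  # exact product minus rounded product
    print(err == 2.0**-60)          # True: fma exposes the lost bit
```

So fma(x, y, z) and (x * y) + z are not bitwise identical, which is exactly why the JIT may not substitute one for the other.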
Smarter Than 'Ctrl+F': Linking Directly to Web Page Content
Is there a specified / supported way to include other parameters in the URI fragment if the fragment part of the URI starts with :~:text=?
https://example.com/page.html#:~:text=[prefix-,]textStart[,textEnd][,-suffix]
https://example.com/page.html#:~:text=...
https://example.com/page.html#:~:text=...&param2=two&:~:param3=three
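The `[prefix-,]textStart[,textEnd][,-suffix]` grammar can be sketched as a small URL builder (a hypothetical helper, not a published API; the draft spec requires percent-encoding `&`, `,`, and `-` inside text values):

```python
from urllib.parse import quote

def text_fragment(url, text_start, text_end=None, prefix=None, suffix=None):
    """Build a #:~:text= fragment per [prefix-,]textStart[,textEnd][,-suffix]."""
    def enc(s):
        # Percent-encode reserved characters; "-" is unreserved for
        # quote() but is a delimiter here, so encode it too.
        return quote(s, safe="").replace("-", "%2D")
    parts = []
    if prefix:
        parts.append(enc(prefix) + "-")
    core = enc(text_start)
    if text_end:
        core += "," + enc(text_end)
    parts.append(core)
    if suffix:
        parts.append("-" + enc(suffix))
    return url + "#:~:text=" + ",".join(parts)

print(text_fragment("https://example.com/page.html", "hello world"))
# https://example.com/page.html#:~:text=hello%20world
```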
w3c/web-annotation#442: "Selector JSON to IRI/URI Mapping: supporting compound fragments so that SPAs work" (2019): https://github.com/w3c/web-annotation/issues/442
WICG/scroll-to-text-fragment#4: "Integration with W3C Web Annotations" (2019): https://github.com/WICG/scroll-to-text-fragment/issues/4#iss...
Room temp chirality switching and detection in a helimagnetic MnAu2 thin film
"Room temperature chirality switching and detection in a helimagnetic MnAu2 thin film" (2024) https://www.nature.com/articles/s41467-024-46326-4 :
> Helimagnetic structures, in which the magnetic moments are spirally ordered, host an internal degree of freedom called chirality corresponding to the handedness of the helix. The chirality seems quite robust against disturbances and is therefore promising for next-generation magnetic memory. While the chirality control was recently achieved by the magnetic field sweep with the application of an electric current at low temperature in a conducting helimagnet, problems such as low working temperature and cumbersome control and detection methods have to be solved in practical applications. Here we show chirality switching by electric current pulses at room temperature in a thin-film MnAu2 helimagnetic conductor. Moreover, we have succeeded in detecting the chirality at zero magnetic fields by means of simple transverse resistance measurement utilizing the spin Berry phase in a bilayer device composed of MnAu2 and a spin Hall material Pt. These results may pave the way to helimagnet-based spintronics.
Computer Organization, free online textbook
emu86 emulates x86, ARM, RISC-V, and WASM instruction sets in Python and in Jupyter Notebooks (which can be graded with ottergrader, nbgrader,)
assembler/virtual_machine.py: https://github.com/gcallah/Emu86/blob/master/assembler/virtu... :
- class RISCVMachine(VirtualMachine): https://github.com/gcallah/Emu86/blob/master/assembler/virtu...
- class WASMMachine(VirtualMachine): https://github.com/gcallah/Emu86/blob/master/assembler/virtu...
assembler/RISCV/key_words.py: https://github.com/gcallah/Emu86/blob/master/assembler/RISCV...
assembler/RISCV/arithmetic.py: https://github.com/gcallah/Emu86/blob/master/assembler/RISCV...
simd, avx, X86S, x86-64-v4,
x86-64 > Microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
The new x86 Alliance, Intel-AMD x86 ISA overhaul org.
"Show HN: RISC-V Linux Terminal emulated via WASM" (2023) and ARM Tetris: https://news.ycombinator.com/item?id=37286019 https://westurner.github.io/hnlog/#story-37286019
From https://news.ycombinator.com/item?id=37086102 :
> [ Bindiff, Diaphora, Ghidra + GDB, ]
> Category:Instruction_set_listings has x86 but no aarch64 [or RISC-V] https://en.wikipedia.org/wiki/Category:Instruction_set_listi...
> /? jupyter asm [kernel]:
> - "Introduction to Assembly Language Tutorial.ipynb" w/ the emu86 jupyter kernel which shows register state after ops: https://github.com/gcallah/Emu86/blob/master/kernels/Introdu...
> - it looks like emu86 already also supports RISC, MIPS, and WASM but not yet ARM:
> - DeepHorizons/iarm: https://github.com/DeepHorizons/iarm/blob/master/iarm_kernel...
JupyterLite docs > adding other kernels: https://jupyterlite.readthedocs.io/en/latest/howto/configure...
A bit lower level, but possibly now with graphene FET transistors:
From "A carbon-nanotube-based tensor processing unit" (2024) https://news.ycombinator.com/item?id=41322070#41322134 :
> "Ask HN: How much would it cost to build a RISC CPU out of carbon?"
"Show HN: RISC-V assembly tabletop board game (hack your opponent)" (2023) https://news.ycombinator.com/item?id=37704760
Migration of the build system to autosetup
OP here!
Richard Hipp, of sqlite fame, has announced that the sqlite project plans to reimplement its canonical build process in the 3.48 release cycle. The project will, if all goes as planned, be moving away from the GNU Autotools and towards Steve Bennett's Autosetup, which we've used to good effect since 2011 in SQLite's sibling project, the Fossil SCM.
The primary motivation behind this HN post is to draw eyes to what Richard points out in the linked forum post: this port _will_ break some folks' build processes and we'd like to know, in advance, what we'll be breaking so that we can make that transition as painless as we can. If you do "unusual" things with the SQLite build, please let us know about it (preferably in the linked-to forum, as that's less likely to be overlooked than comments on HN are, but we'll be following this thread for at least a few days for those of you who would prefer to voice your thoughts here).
Conda-forge/sqlite-feedstock; recipe/meta.yml,
recipe/build.sh: https://github.com/conda-forge/sqlite-feedstock/blob/main/re...
recipe/build.bat: https://github.com/conda-forge/sqlite-feedstock/blob/main/re...
Is there a [multi-stage] Dockerfile to build, install and test SQLite with the new build system?
Is there SLSA signing for the SQLite build artifacts? There should be a way to sign OBS Open Build Service packages with sigstore.dev's Cosign to Rekor, too IIUC.
slsa-framework/slsa-github-generator: https://github.com/slsa-framework/slsa-github-generator
> Conda-forge/sqlite-feedstock; recipe/meta.yml,
Aside from the cross-compilation, which is most definitely part of the process but has not yet been explored in depth[^1], nothing in your build immediately sticks out as problematic, but i've bookmarked it for future reference during the port.
The .bat build (Windows) will not change via this port. Only Unix-like environments will potentially be affected.
> Is there a [multi-stage] Dockerfile to build, install and test SQLite with the new build system?
Not that the sqlite project maintains, no. None of us use docker in any capacity. We work only from the canonical source tree and we like to think that it's easy enough to do that other folks can too.
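For folks who do want a containerized build anyway, a hypothetical multi-stage sketch (not maintained by the SQLite project; the base image, package names, and `make install` target are assumptions, and the configure flags may change with the autosetup port):

```dockerfile
# Hypothetical sketch; not an official SQLite project artifact.
FROM debian:stable-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc make tcl ca-certificates
WORKDIR /src
# COPY the canonical source tree (or a release tarball) into /src here
COPY . /src
RUN ./configure --prefix=/usr/local && make && make install

FROM debian:stable-slim
COPY --from=build /usr/local /usr/local
CMD ["sqlite3", "--version"]
```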
> Is there SLSA signing for the SQLite build artifacts?
That's out of scope for, and unrelated to, this particular sub-project.
> slsa-framework/slsa-github-generator: https://github.com/slsa-framework/slsa-github-generator
says:
> This repository contains free tools to generate and verify SLSA Build Level 3 provenance for native GitHub projects using GitHub Actions.
We don't use github except to post a read-only mirror of the canonical source tree.
[^1]: Autosetup has strong support for cross-compilation, it just hasn't yet been tested in the being-ported build process.
You can sign built artifacts with cosign without actions or podman[-desktop]/docker.
It's probably possible to trigger gh actions or gl pipelines builds (container, command) with git repo mirroring.
datasette-lite depends on a WASM build of SQLite; what about WASM and e.g. emscripten-forge and micromamba?
Edit
emscripten-forge/recipes/recipes_emscripten/SQLite/build.sh: https://github.com/emscripten-forge/recipes/blob/main/recipe... ... recipe.yml says 2022: https://github.com/emscripten-forge/recipes/blob/main/recipe...
> You can sign built artifacts with cosign without actions or podman[-desktop]/docker.
That's all out of scope for, and unrelated to, this build port.
> It's probably possible to trigger gh actions or gl pipelines builds (container, command) with git repo mirroring.
We maintain our own SCM, which keeps us far removed from the github ways of doing things. None of the SQLite project members actively use github beyond posting the read-only project mirror and occasionally cloning other projects and/or participating in tickets, and that's unlikely to change in any foreseeable future.
> emscripten-forge/recipes/recipes_emscripten/SQLite/build.sh:
Thank you - bookmarked for later perusal and testing.
Same. Thanks
What about extensions; does the new build port affect extensions?
/? hnlog sqlite (190+ mentions)
From https://news.ycombinator.com/item?id=40837610#40842365 :
- > There are many extensions of SQLite; rqlite (Raft in Go,), cr-sqlite (CRDT in C), postlite (Postgres wire protocol for SQLite), electricsql (Postgres), sqledge (Postgres), and also WASM: sqlite-wasm, sqlite-wasm-http, dqlite (Raft in Rust),
awesome-sqlite
> What about extensions; does the new build port affect extensions?
It won't, in any way, affect how third parties use the conventional project headers and amalgamated sources. It only has the potential to affect folks who use the "./configure && make" build process, as differences between the Autotools and Autosetup make it impossible to retain 100% semantic compatibility with the build processes (for reasons described one link removed from the forum post this HN post links to).
Discovery of Memory "Glue" Explains Lifelong Recall: KIBRA Protein, PKMzeta
ScholarlyArticle: "KIBRA anchoring the action of PKMζ maintains the persistence of memory" (2024) https://www.science.org/doi/10.1126/sciadv.adl0030 :
> How can short-lived molecules selectively maintain the potentiation of activated synapses to sustain long-term memory? Here, we find kidney and brain expressed adaptor protein (KIBRA), a postsynaptic scaffolding protein genetically linked to human memory performance, complexes with protein kinase Mzeta (PKMζ), anchoring the kinase’s potentiating action to maintain late-phase long-term potentiation (late-LTP) at activated synapses. Two structurally distinct antagonists of KIBRA-PKMζ dimerization disrupt established late-LTP and long-term spatial memory, yet neither measurably affects basal synaptic transmission. Neither antagonist affects PKMζ-independent LTP or memory that are maintained by compensating PKCs in ζ-knockout mice; thus, both agents require PKMζ for their effect. KIBRA-PKMζ complexes maintain 1-month-old memory despite PKMζ turnover. Therefore, it is not PKMζ alone, nor KIBRA alone, but the continual interaction between the two that maintains late-LTP and long-term memory
Are these the primary pathways of long-term forgetting rate, and what about NSCs, hippocampal neurogenesis, memory reconsolidation, and representation drift (connectome temporal instability)?
From https://news.ycombinator.com/item?id=35886145 :
> "Representational drift: Emerging theories for continual learning and experimental future directions" (2022) https://www.sciencedirect.com/science/article/pii/S095943882... :
>> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift
Perhaps also useful for treating memory disorders:
From "Neuroscientists discover mechanism that can reactivate dormant neural stem cells" NSCs (2024) https://news.ycombinator.com/item?id=41875693 :
> "SUMOylation of Warts kinase promotes neural stem cell reactivation" (2024) https://www.nature.com/articles/s41467-024-52569-y
Why Verifiable Credentials Aren't Widely Adopted and Why Trinsic Pivoted
FWIW Blockcerts.org is an open standard for certs verifiable with just cert-verifier-js, and grantable with blockchain-certificates/cert-issuer: https://GitHub.com/blockchain-certificates
Is there a way to store Blockcerts (W3C Verifiable Credentials) in popular wallets yet?
Superconducting flux qubit with ferromagnetic π-junction with 0 magnetic field
"Superconducting flux qubit with ferromagnetic Josephson π-junction operating at zero magnetic field" (2024) https://www.nature.com/articles/s43246-024-00659-1 :
> Abstract: Conventional superconducting flux qubits require the application of a precisely tuned magnetic field to set the operation point at half a flux quantum through the qubit loop, which complicates the on-chip integration of this type of device. It has been proposed that by inducing a π-phase shift in the superconducting order parameter using a precisely controlled nanoscale-thickness superconductor/ferromagnet/superconductor Josephson junction, commonly referred to as π-junction, it is possible to realize a flux qubit operating at zero magnetic flux. Here, we report the realization of a zero-flux-biased flux qubit based on three NbN/AlN/NbN Josephson junctions and a NbN/PdNi/NbN ferromagnetic π-junction. The qubit lifetime is in the microsecond range, which we argue is limited by quasiparticle excitations in the metallic ferromagnet layer. Our results pave the way for developing quantum coherent devices, including qubits and sensors, that utilize the interplay between ferromagnetism and superconductivity.
How do the new Type III superconductors affect this approach?
"Topological gauge theory of vortices in type-III superconductors" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.110.0... .. https://news.ycombinator.com/item?id=41803654#41803662
Understanding Linux Message Queues
What about ebpf and mq? How do network packet IO buffers differ from MQ patterns and implementations? Isn't ebpf faster than e.g. epoll?
/? ebpf "message queue"
laminarmq: https://github.com/arindas/laminarmq :
> - [ ] Single node, multi threaded, eBPF based request to thread routed message queue
A step toward fully 3D-printed active electronics
Reading the article, it looks like so far they only have a working resettable fuse (a passive device), and only hypothesize that a transistor was possible with the copper-infused PLA filament. So no actual working active electronics.
And from the paper linked in the article[1], it seems the actual breakthrough is the discovery that copper-infused PLA filament exhibits a PTC-effect, which is noteworthy, but definitely not "3D-Printed Active Electronics" newsworthy.
[1] https://www.tandfonline.com/doi/full/10.1080/17452759.2024.2...
From https://news.ycombinator.com/item?id=40759133 :
> In addition to nanolithography and nanoassembly, there is 3d printing with graphene.
And conductive aerogels, and carbon nanotube production at scale
From https://news.ycombinator.com/item?id=41210021 :
> There's already conductive graphene 3d printing filament (and far less conductive graphene). Looks like 0.8ohm*cm may be the least resistive graphene filament available: https://www.google.com/search?q=graphene+3d+printer+filament...
> Are there yet CNT or Twisted SWCNT Twisted Single-Walled Carbon Nanotube substitutes for copper wiring?
Aren't there carbon nanotube superconducting cables?
Instead of copper, there are plastic waveguides
Language is not essential for the cognitive processes that underlie thought
This is an important result.
The actual paper [1] says that functional MRI (which is measuring which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.
What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, the low-end mammals and the corvids lack language but have some substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out to do this, it will probably take less hardware than an LLM.
This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.
[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...
"Language models can explain neurons in language models" https://news.ycombinator.com/item?id=35877402#35886145 :
> Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large scale changes over days and weeks—a phenomenon called representational drift.
[...]
So, I'm not sure how conclusive this fmri activation study is either.
Though, is there a proto language that's not even necessary for the given measured aspects of cognition?
Which artificial network architecture best approximates which functionally specialized biological neural networks?
OpenCogPrime:KnowledgeRepresentation > Four Types of Knowledge: https://wiki.opencog.org/w/OpenCogPrime:KnowledgeRepresentat... :
> Sensory, Procedural, Episodic, Declarative
From https://news.ycombinator.com/item?id=40105068#40107537 re: cognitive hierarchy and specialization :
> But FWIU none of these models of cognitive hierarchy or instruction are informed by newer developments in topological study of neural connectivity;
Common mistakes to avoid when managing a cap table
I’m currently working on the cap table for my early-stage startup and want to make sure we get it right from the start. What are some common mistakes or pitfalls to avoid when structuring and maintaining a cap table? Any tools or best practices that have worked well for others?
- "Ask HN: Why don’t startups share their cap table and/or shares outstanding?" https://news.ycombinator.com/item?id=29141668#29141796
Capitalization table: https://en.wikipedia.org/wiki/Capitalization_table
Removing PGP from PyPI (2023)
This is slightly old news. For those curious, PGP support on the modern PyPI (i.e. the new codebase that began to be used in 2017-18) was always vestigial, and this change merely polished off a component that was, empirically[1], doing very little to improve the security of the packaging ecosystem.
Since then, PyPI has been working to adopt PEP 740[2], which both enforces a more modern cryptographic suite and signature scheme (built on Sigstore, although the design is adaptable) and is bootstrapped on PyPI's support for Trusted Publishing[3], meaning that it doesn't have the fundamental "identity" problem that PyPI-hosted PGP signatures have.
The hard next step from there is putting verification in client hands, which is the #1 thing that actually makes any signature scheme actually useful.
[1]: https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI...
It's good that PyPI signs whatever is uploaded to PyPI using PyPI's key now.
GPG ASC support on PyPI was nearly as useful as uploading signatures to sigstore.
1. Is it yet possible to - with pypa/twine - sign a package uploaded to PyPI, using a key that users somehow know to trust as a release key for that package?
2. Does pip check software publisher keys at package install time? Which keys does pip trust to sign which package?
3. Is there any way to specify which keys to trust for a given package in a requirements.txt file?
4. Is there any way to specify which keys to trust for a version of a given package with different bdist releases, with Pipfile.lock, or pixi or uv?
People probably didn't GPG sign packages on PyPI because it wasn't easy or required to sign a package using a registered key/DID in order to upload.
Anyone can upload a signature for any artifact to sigstore. Sigstore is a centralized cryptographic signature database for any file.
Why should package installers trust that a software artifact publisher key [on sigstore or the GPG keyserver] is a release key?
gpg --recv-key downloads a public key for a given key fingerprint over HKP (HTTPS with the same CA cert bundle as everything else).
GPG keys can be wrapped as W3C DIDs FWIU.
W3C DIDs can optionally be centrally generated (like LetsEncrypt with ACME protocol).
W3C DIDs can optionally be centrally registered.
GPG or not, each software artifact publisher key must be retrieved over a different channel than the packages.
If PyPI acts as the (package, release_signing_key) directory and/or the keyserver, is that any better than hosting .asc signatures next to the downloads?
GPG signatures and wheel signatures were and are still better than just checksums.
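For reference, the checksum baseline is just content hashing (a minimal stdlib sketch; a checksum pins what was downloaded, while a signature additionally binds who published it, which is the gap at issue here):

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream a file through SHA-256, as hash-pinning installers do."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```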
Why should we trust that a given key is a release signing key for that package?
Why should we trust that a release signing key used at the end of a [SLSA] CI build hasn't been compromised?
How do clients grant and revoke their trust of a package release signing key with this system?
... With GPG or [GPG] W3C DIDs or whichever key algo and signed directory service.
> It's good that PyPI signs whatever is uploaded to PyPI using PyPI's key now.
You might have misunderstood -- PyPI doesn't sign anything with PEP 740. It accepts attestations during upload, which are equivalent to bare PGP signatures. The big difference between the old PGP signature support and PEP 740 is that PyPI actually verifies the attestations on uploads, meaning that everything that gets stored and re-served by PyPI goes through a "can this actually be verified" sanity check first.
I'll try to answer the others piecewise:
1. Yes. You can use twine's `--attestations` flag to upload any attestations associated with distributions. To actually generate those attestations you'll need to use GitHub Actions or another OIDC provider currently supported as a Trusted Publisher on PyPI; the shortcut for doing that is to enable `attestations: true` while uploading with `gh-action-pypi-publish`. That's the happy path that we expect most users to take.
2. Not yet; the challenge there is primarily technical (`pip` can only vendor pure Python things, and most of PyCA cryptography has native dependencies). We're working on different workarounds for this; once they're ready `pip` will know which identity - not key - to trust based on each project's Trusted Publisher configuration.
3. Not yet, but this is needed to make downstream verification in a TOFU setting tractable. The current plan is to use the PEP 751 lockfile format for this, once it's finished.
4. That would be up to each of those tools to implement. They can follow PEP 740 to do it if they'd like.
I don't really know how to respond to the rest of the comment, sorry -- I find it a little hard to parse the connections you're drawing between PGP, DIDs, etc. The bottom line for PEP 740 is that we've intentionally constrained the initial valid signing identities to ones that can be substantiated via Trusted Publishing, since those can also be turned into verifiable Sigstore identities.
So PyPI acts as keyserver, and basically a CSR signer for sub-CA wildcard package signing certs, and the package+key mapping trusted authority; and Sigstore acts as signature server; and both are centralized?
And something has to call cosign (?) to determine what value to pass to `twine --attestations`?
Blockcerts with DID identities is the W3C way to do Software Supply Chain Security like what SLSA.dev describes FWIU.
And then now it's possible to upload packages and native containers to OCI container repositories, which support artifact signatures with TUF since docker notary; but also not yet JSON-LD/YAML-LD that simply joins with OSV OpenSSF and SBOM Linked Data on registered (GitHub, PyPI, conda-forge, deb-src,) namespace URIs.
GitHub supports requiring GPG signatures on commits.
Git commits are precedent to what gets merged and later released.
A rough chronology of specs around these problems: {SSH, GPG, TLS w/ CA cert bundles, WebID, OpenID, HOTP/TOTP, Bitcoin, WebID-TLS, TOTP, OIDC OpenID Connect (TLS, HTTP, JSON, OAuth2.0, JWT), TUF, Uptane, CT Logs, WebAuthn, W3C DID, Blockcerts, SOLID-OIDC, Shamir Backup, transferable passkeys, }
For PyPI: PyPI.org, TLS, OIDC OpenID Connect, twine, pip, cosign, and Sigstore.dev.
Sigstore Rekor has centralized Merkle hashes like google/trillian, which centrally hosts Certificate Transparency logs of x509 cert grants and revocations by Certificate Authorities.
W3C DID Decentralized Identifiers [did-core] > 9.8 Verification Method Revocation : https://www.w3.org/TR/did-core/#verification-method-revocati... :
> If a verification method is no longer exclusively accessible to the controller or parties trusted to act on behalf of the controller, it is expected to be revoked immediately to reduce the risk of compromises such as masquerading, theft, and fraud
> So PyPI acts as keyserver, and basically a CSR signer for sub-CA wildcard package signing certs, and the package+key mapping trusted authority; and Sigstore acts as signature server; and both are centralized?
No -- there is no keyserver per se in PEP 740's design, because PEP 740 is built around identity binding instead of long-lived signing keys. PyPI does no signing at all and has no CA or CSR components; it acts only as an attestation store for attestations, which it verifies on upload.
PyPI signs uploaded packages with its signing key per PEP 458: https://peps.python.org/pep-0458/ :
> This [PEP 458 security] model supports verification of PyPI distributions that are signed with keys stored on PyPI
So that's deprecated by PEP 740 now?
PEP 740: https://peps.python.org/pep-0740/ :
> In addition to the current top-level `content` and `gpg_signature` fields, the index SHALL accept `attestations` as an additional multipart form field.
> The new `attestations` field SHALL be a JSON array.
> The `attestations` array SHALL have one or more items, each a JSON object representing an individual attestation.
> Each attestation object MUST be verifiable by the index. If the index fails to verify any attestation in attestations, it MUST reject the upload. The format of attestation objects is defined under Attestation objects and the process for verifying attestations is defined under Attestation verification.
What is the worst case resource cost of an attestation validation required of PyPI?
blockchain-certificates/cert-verifier-js; https://westurner.github.io/hnlog/ Ctrl-F verifier-js
`attestations` and/or `gpg_signature`;
From https://news.ycombinator.com/item?id=39204722 :
> An example of GPG signatures on linked data documents: https://gpg.jsld.org/contexts/#GpgSignature2020
W3C DID self-generated keys also work with VC linked data; could a new field like the `attestations` field solve for DID signatures on the JSON metadata; or is that still centralized and not zero trust?
Today is Ubuntu's 20th Anniversary
Ubuntu!
SchoolTool also started 20 years ago. Then Open edX, and it looks like e.g. Canvas for LMS these days.
To run an Ubuntu container with podman:
apt-get install -y podman distrobox
podman run --rm -it docker.io/ubuntu:24.04 bash --login
podman run --rm -it docker.io/ubuntu:24.10 bash --login
GitHub Codespaces are Ubuntu devcontainers (codespaces-linux).
Windows Subsystem for Linux (WSL) installs Ubuntu (otherwise there's Git Bash and podman-desktop over there).
The jupyter/docker-stacks containers build FROM Ubuntu:24.04 LTS: https://github.com/jupyter/docker-stacks/blob/main/images/do... :
docker run -p 10000:8888 quay.io/jupyter/scipy-notebook:2024-10-07
All-optical switch device paves way for faster fiber-optic communication
It's funny to see this show up now. Optical-domain switching was a hot area of research in the early 2000s (e.g. [1]); it largely petered out as it became clear that digital-domain switching was more practical (because it didn't rely on exotic technology, and provided time for hardware to perform routing operations).
[1]: https://www.laserfocusworld.com/optics/article/16556781/many...
Is optical digital or analog?
Either/both. A nice feature of lightpath networks is that they are somewhat independent of the signal encoding.
Optical computation can also be digital or analog.
In this case, the control signal appears to be digital while the switched signal can be anything.
What is theoretical computer science?
This is a great article and I especially liked the notion:
>Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded. Analogously, I’d argue that the role of TCS is to explain/predict real-life computing.
as well as the emphasis on the difference between TCS in Europe and the US. I remember from the University of Crete that the professors all spent serious time in the labs coding and testing. Topics like Human-Computer Interaction, Operating Systems Research and lots of Hardware (VLSI etc) were core parts of the theoretical Computer Science research areas. This is why no UoC graduate could graduate without knowledge both in Algorithms and PL theory, for instance, AND circuit design (my experience is from 2002-2007).
I strongly believe that this breadth of concepts is essential to Computer Science, and the narrower emphasis of many US departments (not all) harms both the intellectual foundations and practical employment prospects of the graduate. [I will not debate this point online; I'll be happy to engage in hours long discussion in person]
>Theoretical physics is highly mathematical, but it aims to explain and predict the real world. Theories that fail at this “explain/predict” task would ultimately be discarded.
This isn't really true, is it? There are mathematical models that predict but do not explain the real world. The most glaring of them is the transmission of EM waves without a medium, and the particle/wave duality of matter. In the former case, there was a concerted attempt to prove existence of the medium (luminiferous aether) that failed and ended up being discarded - we accept now that no medium is required, but we don't know the physical process of how that works.
On the Cave and the Light,
List of common misconceptions > Science, technology, and mathematics: https://en.wikipedia.org/wiki/List_of_common_misconceptions :
> See also: Scientific misconceptions, Superseded theories in science, and List of topics characterized as pseudoscience
Allegory of the cave > See also,: https://en.wikipedia.org/wiki/Allegory_of_the_cave
The only true statement:
All models are wrong: https://en.wikipedia.org/wiki/All_models_are_wrong
Map–territory relation: https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation :
> A frequent coda to "all models are wrong" is that "all models are wrong (but some are useful)," which emphasizes the proper framing of recognizing map–territory differences—that is, how and why they are important, what to do about them, and how to live with them properly. The point is not that all maps are useless; rather, the point is simply to maintain critical thinking about the discrepancies: whether or not they are either negligible or significant in each context, how to reduce them (thus iterating a map, or any other model, to become a better version of itself), and so on.
Counterintuitive Properties of High Dimensional Space (2018)
One that isn't listed here, and which is critical to machine learning, is the idea of near-orthogonality. When you think of 2D or 3D space, you can only have 2 or 3 orthogonal directions, and allowing for near-orthogonality doesn't really gain you anything. But in higher dimensions, you can reasonably work with directions that are only somewhat orthogonal, and "somewhat" gets pretty silly large once you get to thousands of dimensions -- like 75 degrees is fine (I'm writing this from memory, don't quote me). And the number of orthogonal-enough dimensions you can have scales as maybe as much as 10^sqrt(dimension_count), meaning that yes, if your embeddings have 10,000 dimensions, you might be able to have literally 10^100 different orthogonal-enough dimensions. This is critical for turning embeddings + machine learning into LLMs.
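The near-orthogonality claim is easy to check numerically: two independent random directions in d dimensions have a cosine concentrated around 0 with spread ~1/sqrt(d) (a stdlib-only sketch):

```python
import math
import random

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

random.seed(0)
d = 10_000
u = [random.gauss(0, 1) for _ in range(d)]
v = [random.gauss(0, 1) for _ in range(d)]
# cos(angle) concentrates near 0 with spread ~ 1/sqrt(d) = 0.01,
# so two random 10,000-dimensional directions are nearly orthogonal:
print(abs(90.0 - angle_deg(u, v)) < 5.0)  # True
```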
Does distance in feature space require orthogonality?
With real space (x,y,z) we omit the redundant units from each feature when describing the distance in feature space.
But distance is just a metric, and often the space or paths through it are curvilinear.
By Taxicab distance, it's 3 cats, 4 dogs, and 5 glasses of water away.
Python now has math.dist() for Euclidean distance, for example.
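For instance, with the 3-4-5 example above (mixed units and all), the two metrics disagree; a minimal stdlib comparison:

```python
import math

a = (0, 0)
b = (3, 4)  # say, 3 cats and 4 dogs away

euclidean = math.dist(a, b)                      # straight-line (L2) distance
taxicab = sum(abs(p - q) for p, q in zip(a, b))  # Manhattan (L1) distance

print(euclidean)  # 5.0
print(taxicab)    # 7
```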
Near-orthogonality allows fitting in more directions for distinct concepts than the dimension of the space. So even though the dimension of an LLM might be <2000, far far more than 2000 distinct directions can fit into that space.
The term most often used is "superposition." Here's some material on it that I'm working through right now:
https://arena3-chapter1-transformer-interp.streamlit.app/%5B...
Skew coordinates aren't orthogonal.
Skew coordinates: https://en.wikipedia.org/wiki/Skew_coordinates
Are the features described with high-dimensional spaces really all 90° geometrically orthogonal?
How does the distance metric vary with feature order?
Do algorithmic outputs diverge or converge given variance in the sequence order of the orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, or does it converge regardless?
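For the common distance metrics at least, feature order cannot matter as long as both points are permuted consistently, since the metric is a sum over axes; a small stdlib check with toy values:

```python
import math

a = [1.0, 2.0, 3.0]
b = [4.0, 6.0, 8.0]
perm = [2, 0, 1]  # an arbitrary reordering of the three feature axes

# Euclidean distance sums squared per-axis differences, so reordering the
# axes identically in both points leaves the distance unchanged.
d_original = math.dist(a, b)
d_permuted = math.dist([a[i] for i in perm], [b[i] for i in perm])

print(d_original == d_permuted)  # True
```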
Re: superposition in this context, too
Are there multiple particles in the same space, or is it measuring a point-in-time sampling of the possible states of one particle?
(Can photons actually occupy the same point in spacetime? Can electrons? But the plenoptic function describes all light passing through a point or all of the space)
Expectation values are or are not good estimators of wave function outputs from discrete quantum circuits and real quantum systems.
To describe the products of the histogram PDFs
> Are the [features] described with high-dimensional spaces really all 90° geometrically orthogonal?
If the features are not statistically independent, I don't think it's likely that they're truly orthogonal; which might not affect the utility of a distance metric that assumes that they are all orthogonal.
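To illustrate that link with synthetic data (all numbers are toy choices, stdlib only): for mean-centered feature columns, the cosine of the angle between them equals their Pearson correlation, so statistically dependent features are geometrically non-orthogonal.

```python
import math
import random

def center(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

rng = random.Random(7)
x = [rng.gauss(0, 1) for _ in range(5000)]
y = [0.8 * xi + 0.6 * rng.gauss(0, 1) for xi in x]  # built to correlate with x

# cosine ~= 0.8, the correlation; truly orthogonal features would give ~0.0
print(round(cosine(center(x), center(y)), 2))
```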
Neuroscientists discover mechanism that can reactivate dormant neural stem cells
"SUMOylation of Warts kinase promotes neural stem cell reactivation" (2024) https://www.nature.com/articles/s41467-024-52569-y :
> Intro: Drosophila NSCs in the central brain (CB) and thoracic ventral nerve cord (VNC) enter into quiescence at the end of embryogenesis, and subsequently exit quiescence (reactivate) in response to dietary amino acids, typically within 24 h after larval hatching (h ALH) (Fig. 1a) [14,15,16]. The nutritional signals originate from the Drosophila fat body—functional equivalent of mammalian liver and adipose tissue [17]—and lead to activation of the insulin/insulin-like growth factor (IGF) signaling pathway [18,19] and inactivation of the Hippo signaling pathway in NSCs for their reactivation [18,19,20,21].
Amplification of electromagnetic fields by a rotating body
Could this be used as an engine of some kind? The spinny thing giving off EM waves and those waves are caught by something like a solar sail?
ScholarlyArticle: "Amplification of electromagnetic fields by a rotating body" (2024) https://www.nature.com/articles/s41467-024-49689-w
> Could this be used as an engine of some kind?
What about helical polarization?
"Chiral Colloidal Molecules And Observation of The Propeller Effect" https://pmc.ncbi.nlm.nih.gov/articles/PMC3856768/
Sugar molecules are asymmetrical / handed, per 3blue1brown and Steve Mould. /? https://www.google.com/search?q=Sugar+molecules+are+asymmetr....
Is there a way to get the molecular propeller effect, and thereby molecular locomotion, with molecules that contain sugar and a rotating field or a rotating molecule within a field?
Fallacies of Distributed Computing
Fallacies of distributed computing: https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu... :
> The originally listed fallacies are:
> The network is reliable; Latency is zero; Bandwidth is infinite; The network is secure; Topology doesn't change; There is one administrator; Transport cost is zero; The network is homogeneous;
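A minimal sketch of what taking the first two fallacies seriously looks like in code (flaky_call and its failure rate are invented for illustration): explicit timeouts plus retries with exponential backoff, instead of assuming a reliable, zero-latency network.

```python
import random
import time

def flaky_call(rng):
    """Stand-in for any network RPC; fails ~70% of the time here."""
    if rng.random() < 0.7:
        raise TimeoutError("simulated network failure")
    return "ok"

def call_with_retries(rng, attempts=5, base_delay=0.01):
    """Retry a flaky call, backing off exponentially between attempts."""
    for attempt in range(attempts):
        try:
            return flaky_call(rng)
        except TimeoutError:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise TimeoutError("all retries exhausted")

print(call_with_retries(random.Random(42)))  # succeeds after a few retries
```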
All asteroids in Solar System, visualized
TIL about Kirkwood gaps. I knew about Jupiter leading to Hilda and Trojan groupings, but my understanding of the main belt was more in line with a representation like this one from Wikipedia[1]. However, this imaging shows a clear gap and a quick search led me to learn about Kirkwood gaps. Is there a specific reason why the gap is more evident in this imaging rather than the above one from Wikipedia?
[1] - https://en.wikipedia.org/wiki/File:InnerSolarSystem-en.png
TIL about Kirkwood gaps too: https://en.wikipedia.org/wiki/Kirkwood_gap :
> A Kirkwood gap is a gap or dip in the distribution of the semi-major axes (or equivalently of the orbital periods) of the orbits of main-belt asteroids. They correspond to the locations of orbital resonances with Jupiter.
(Similarly, shouldn't it be possible to infer photon phase without causing wave state collapse?)
/? Jupiter’s effect on Earth’s climate https://www.google.com/search?q=Jupiter%E2%80%99s+effect+on+...
Time Duration in JavaScript
Looks like Safari Technology Preview (TP) only? https://caniuse.com/?search=temporal
https://caniuse.com/mdn-javascript_builtins_temporal_duratio...
Temporal.Duration: https://tc39.es/proposal-temporal/docs/#Temporal-Duration
IIRC Python datetime objects initially lacked timezone support and a timezone database.
datetime.MINYEAR and the "can't represent datetimes before the Unix epoch, or before 0001-01-01" problem.
E.g. Astropy supports astronomical year numbering and year zero.
Year Zero: https://en.wikipedia.org/wiki/Year_zero
Python datetime module: https://docs.python.org/3/library/datetime.html
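Both limitations are easy to demonstrate with the stdlib:

```python
from datetime import MINYEAR, datetime, timezone

# Naive vs. aware: timezone support was bolted on after the original design,
# and naive datetimes still carry no timezone at all.
naive = datetime(2024, 1, 1)
aware = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(naive.tzinfo, aware.tzinfo)  # None UTC

# No year zero: datetime bottoms out at year 1 (MINYEAR), so astronomical
# year numbering needs a library such as astropy.
print(MINYEAR)  # 1
try:
    datetime(0, 1, 1)
except ValueError:
    print("year 0 is not representable")
```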
A free phonetic generator (spelling, IPA transcription, prononciation)
Ideas for language learning products w/ IPA: https://news.ycombinator.com/item?id=38317345
The deconstructed Standard Model equation
The original TeX representation of the formula was written by Thomas D. Gutierrez in 1999 [1]. It was discussed many times on HN, initially in 2016, a day after this article was posted on Symmetry Magazine [2].
Are there (SymPy,) CAS versions of the Standard Model Lagrangians and corollaries? Maybe latex2sympy2 (antlr)?
Mathematical formulation of the Standard Model: https://en.wikipedia.org/wiki/Mathematical_formulation_of_th...
WebPKI – Introduce Schedule of Reducing Validity (Of TLS Server Certificates)
Let's Encrypt certs, including wildcards, are valid for 90 days, and they recommend renewing them after 60 days.
Cert validity intervals directly affect the storage and bandwidth requirements for CT logs, which should be replicated.
Does anyone serve the CT Certificate Transparency logs for checking by browsers?
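A back-of-envelope sketch of why validity intervals drive CT log growth (every number here is an invented placeholder): shorter lifetimes mean proportionally more issuance, and every issued cert is appended to the logs.

```python
SITES = 1_000_000    # hypothetical population of certified domains
ENTRY_BYTES = 1500   # assumed rough size of one CT log entry

def yearly_log_bytes(validity_days):
    """Annual CT log growth if every site renews as certs expire."""
    certs_per_site_per_year = 365 / validity_days
    return SITES * certs_per_site_per_year * ENTRY_BYTES

# Moving from 1-year to 90-day certs multiplies log growth ~4x; proposed
# shorter schedules scale the replication burden the same way.
print(yearly_log_bytes(90) / yearly_log_bytes(365))
```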
Topological gauge theory of vortices in type-III superconductors
"Topological gauge theory of vortices in type-III superconductors" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.110.0... :
> Abstract: Usual superconductors fall into two categories, type I, expelling magnetic fields, and type II, into which magnetic fields exceeding a lower critical field H_c1 penetrate in a form of vortices characterized by two scales, the size of the normal core, ξ, and the London penetration depth λ. Here we demonstrate that a type-III superconductivity, realized in granular media in any dimension, hosts vortex physics in which vortices have no cores, are logarithmically confined, and carry only a gauge scale λ. Accordingly, in type-III superconductors H_c1=0 at zero temperature and the Ginzburg-Landau theory must be replaced by a topological gauge theory. Type-III superconductivity is destroyed not by Cooper pair breaking but by vortex proliferation generalizing the Berezinskii-Kosterlitz-Thouless mechanism to any dimension.
Berezinskii–Kosterlitz–Thouless (BKT) transition: https://en.wikipedia.org/wiki/Berezinskii%E2%80%93Kosterlitz...
Ask HN: Parameter-free neural network models: Limits, Challenges, Opportunities?
Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
> Why should experts bias NN architectural parameters if there are "Parameter-free" neural network graphical models?
The Asimov Institute > The Neural Network Zoo: https://www.asimovinstitute.org/neural-network-zoo
"A mostly complete chart of neural networks" (2019) includes Hopfield nets! https://www.asimovinstitute.org/wp-content/uploads/2019/04/N...
Category:Neural network architectures: https://en.wikipedia.org/wiki/Category:Neural_network_archit...
Types of artificial neural networks: https://en.wikipedia.org/wiki/Types_of_artificial_neural_net...
Systems Modeling to Refine Strategy
From "Architectural Retrospectives: The Key to Getting Better at Architecting" https://news.ycombinator.com/item?id=41234471 :
> Is there already a good way to link an ADR Architectural Decision Record with Threat Modeling primitives and considerations?
Awesome-threat-modeling > Tools: https://github.com/hysnsec/awesome-threat-modelling#free-too...
- pytm: https://github.com/izar/pytm
- threatspec has a docstring spec for adding threat modeling annotations to source code: https://threatspec.org/
- OWASP Threat Dragon : https://owasp.org/www-project-threat-dragon/ :
> Threat Dragon supports STRIDE / LINDDUN / CIA / DIE / PLOT4ai, provides modeling diagrams and implements a rule engine to auto-generate threats and their mitigations.
- OWASP Threat Modeling Cheat Sheet > Systems Modeling: https://cheatsheetseries.owasp.org/cheatsheets/Threat_Modeli...
- https://github.com/dehydr8/elevation-of-privilege:
> An online multiplayer version of the Elevation of Privilege (EoP) threat modeling card game
Re: "Thinking in Systems" by Donella Meadows:
- "Limits to Growth: The 30-Year Update" (2012) by Donella H. Meadows. https://a.co/7MgO0bv
- "Leverage Points: Places to Intervene in a System" (2018) https://news.ycombinator.com/item?id=17781927 & wikipedia links to systems theory
Google must open Android for third-party stores, rules Epic judge
Epic's "First Run" program does all the things they got mad at Apple and Google about.
You don't have to pay any license fees for Unreal Engine if you use Epic exclusively for payments. They give you 100% revshare for 6 months if you agree to not ship your game on any other app store.
Let's not kid ourselves, Epic never cared about consumer choice or a fair playing field, they only want the ability to profit without having to invest in building a hardware platform.
All I care about is having some healthy competition in the marketplace again.
If that takes tying Google's hands behind its back for a few years, fine.
Taking a 30% cut should have been, prima facie, evidence of monopoly abuse.
Tying Google's hands while giving Apple a clean bill of health isn't going to increase competition, it's going to solidify Apple's lead in the US.
The competition that actually matters is between whole platforms, it's only Epic's lawyers who want everyone to get fixated on the idea that the app store markets are the whole story (or even really a significant portion of the story). I could totally get behind efforts to prevent Google and Apple from together duopolizing the entire mobile phone space, but this is not that.
The important competition isn't Google v Apple.
It's everyone else v Google and/or Apple.
Ironically, forcing this onto Google might be the best thing for Android, as it's the openness the platform really needed to compete against Apple, but costs Google too much revenue to ever do it if left to their own decision.
> it's the openness the platform really needed to compete against Apple
It’s hard to believe the best way to compete with Apple is by being more open than they are.
If that worked, we would all be running some unix os on our phones.
It’s clearly an utterly false premise.
This is going to hurt Google and do literally nothing to Apple, except perhaps scare some people into the “safety” of its curated ecosystem.
Apologies for the pedantic nitpick, but:
iOS is based on Darwin which is BSD based, and Android is based on Linux, which is heavily inspired by Unix.
So technically, we are "all running some unix os on our phones"... perhaps you mean some open-source OS? Even then, Android markets itself as being open-source (although I imagine they must add some closed source stuff on top of it).
Apple XCode is required to compile code for Apple iOS. Android Studio is not required to compile code for Android.
Android Studio doesn't run on Android. XCode IDE doesn't run on iOS.
XCode IDE requires a MacOS device. Android Studio works on Win/Mac/Linux/Chromebook_with_containers.
Android Studio runs in a Linux container. XCode only runs on MacOS.
Apple does not allow, and per a recent ruling does not have to allow, 3rd party app stores.
(There are already 3rd party app "stores" for Android like FDroid.)
Android already allows, and per a recent ruling, MUST allow 3rd party app stores.
In terms of Application stores, (after trying to trademark the phrase "app store" to anticompete Amazon) Apple gets protectionism for their walled garden, and Android may not have a walled garden.
Apple went with Objective-C; C with standard macros; IIRC before C++?
Apple built a new language called Swift.
The Java pitch was "write once, run anywhere" because you port the JVM, and J2ME.
Google rewrote many parts of the Java stack.
Google ported to Apache Harmony, and then OpenJDK, to move away from Sun's and then Oracle's Java.
Google built a language called Kotlin.
Swift runs on MacOS, iOS, and Android.
Kotlin runs on Android/iOS/Win/Mac/Linux/JS. Dart lang also runs on any platform. Flutter also runs on any platform.
Gamma radiation is produced in large tropical thunderstorms
This somehow reminds me of the fact that you can produce (surprisingly high-quality) x-rays by unrolling scotch tape in a vacuum chamber[0][1]. I wonder if it turns out to be related in any way. Thunderstorms aren't a vacuum of course, but I dunno, maybe all that frozen hail being thrown around and bumping into each other still involves a similar underlying mechanism somewhere.
What about guitar pedal velcro tape?
[0] "Correlation between nanosecond X-ray flashes and stick–slip friction in peeling tape" (2008) https://www.nature.com/articles/nature07378
> Triboluminescence is a phenomenon in which light is generated when a material is mechanically pulled apart, ripped, scratched, crushed, or rubbed (see tribology). [...] Triboluminescence is often a synonym for fractoluminescence (a term mainly used when referring only to light emitted from fractured crystals). Triboluminescence differs from piezoluminescence in that a piezoluminescent material emits light when deformed, as opposed to broken. These are examples of mechanoluminescence, which is luminescence resulting from any mechanical action on a solid. [...] See also:
> - Earthquake light: https://en.wikipedia.org/wiki/Earthquake_light
- re: gold from earthquake-induced piezoelectricity in quartz: https://news.ycombinator.com/item?id=41442489
- re: sustainable alternatives to quartz countertops: https://news.ycombinator.com/item?id=36857929 :
> [...] make recycled paper outdoor waterproof vert ramps, countertops, siding, and flooring; but not yet roofing FWIU?
> - Sonoluminesence: https://en.wikipedia.org/wiki/Sonoluminescence
If you dropped a thing through a storm to the water, would the gamma radiation charge it?
TIL carbon nano yarn absorbs electricity, probably from storm clouds too.
What are the volt and charge observations for lightning from large tropical thunderstorms?
(And why is it dangerous to attract arc discharge toward a local attractor? And what sort of supercapacitors and anodes can handle charge from a lightning bolt? Lightning!)
Tardigrades can handle Gamma radiation.
"Researchers create new type of composite material for shielding against neutron and gamma radiation" (2024) https://phys.org/news/2024-05-composite-material-shielding-n... :
"Sm2O3 micron plates/B4C/HDPE composites containing high specific surface area fillers for neutron and gamma-ray complex radiation shielding" https://www.sciencedirect.com/science/article/abs/pii/S02663...
On lightning and safety and electrostatics: "Answer to Is there a device that attracts lightning when storms are near? How can I make a lightning rod to do experiments with lightning?" (202_ yrs ago) https://www.quora.com/Is-there-a-device-that-attracts-lightn... :
Electrostatics: https://en.wikipedia.org/wiki/Electrostatics
Photoelectric effect (which supports wave–particle duality) > History: https://en.wikipedia.org/wiki/Photoelectric_effect :
> Einstein theorized that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called the Planck constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a step in the development of quantum mechanics. In 1914, Robert A. Millikan's highly accurate measurements of the Planck constant from the photoelectric effect supported Einstein's model, even though a corpuscular theory of light was for Millikan, at the time, "quite unthinkable". [47] Einstein was awarded the 1921 Nobel Prize in Physics for "his discovery of the law of the photoelectric effect", [48] and Millikan was awarded the Nobel Prize in 1923 for "his work on the elementary charge of electricity and on the photoelectric effect".[49]
Does this help to explain the genetic diversity of the tropical latitudes; is the genetic mutation rate higher in the presence of gamma radiation?
So many of our plants and flowers (here in North America) originate from rainforests and tropical latitudes, but survive at current temps for northern latitudes.
No, the effect is insignificant; the tropics are diverse because there is more biomass.
/? is the genetic mutation rate higher in the presence of gamma radiation: https://www.google.com/search?q=is%20the%20genetic%20mutatio... :
- "Frequency and Spectrum of Mutations Induced by Gamma Rays Revealed by Phenotype Screening and Whole-Genome Re-Sequencing in Arabidopsis thaliana" (2022) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8775868/ :
> Gamma rays have been widely used as a physical agent for mutation creation in plants, and their mutagenic effect has attracted extensive attention. However, few studies are available on the comprehensive mutation profile at both the large-scale phenotype mutation screening and whole-genome mutation scanning.
- "Bedrock radioactivity influences the rate and spectrum of mutation" (2020) https://elifesciences.org/articles/56830
- "Comparison of mutation spectra induced by gamma-rays and carbon ion beams" (2024) https://academic.oup.com/jrr/article/65/4/491/7701037 :
> These results suggest that carbon ion beams produce complex DNA damage, and gamma-rays are prone to single oxidative base damage, such as 8-oxoguanine. Carbon ion beams can also introduce oxidative base damage, and the damage species is 5-hydroxycytosine. This was consistent with our previous results of DNA damage caused by heavy ion beams. We confirmed the causal DNA damage by mass spectrometry for these mutations.
Homemade AI drone software finds people when search and rescue teams can't
{Code-and-Response, Call-for-Code}/DroneAid : "DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2010) https://github.com/Code-and-Response/DroneAid .. https://westurner.github.io/hnlog/#story-22707347 :
CORRECTION: All but one of the DroneAid Symbol Language Symbols are drawn within upward pointing triangles.
Is there a simpler set of QR codes for the ground that could be made with sticks or rocks or things the wind won't bend?
Show HN: Compiling C in the browser using WebAssembly
Cling (the interactive C++ interpreter) should also compile to WASM.
There's a xeus-cling Jupyter kernel, which supports interactive C++ in notebooks: https://github.com/jupyter-xeus/xeus-cling
There's not yet a JupyterLite (WASM) kernel for C or C++.
Forget Superconductors: Electrons Living on the Edge Could Unlock Perfect Power
ScholarlyArticle: "Observation of chiral edge transport in a rapidly rotating quantum gas" (2024) https://www.nature.com/articles/s41567-024-02617-7
- "Electrical switching of the edge current chirality in quantum Hall insulators" (2023) https://news.ycombinator.com/item?id=38139569
Does this affect the standing issue of whether more current flows mostly through a wire or mostly down the edges, and whether stranded wire is better for a given application?
Copper conductor: https://en.wikipedia.org/wiki/Copper_conductor
Skin effect: https://en.wikipedia.org/wiki/Skin_effect :
> In electromagnetism, skin effect is the tendency of an alternating electric current (AC) to become distributed within a conductor such that the current density is largest near the surface of the conductor and decreases exponentially with greater depths in the conductor. It is caused by opposing eddy currents induced by the changing magnetic field resulting from the alternating current. The electric current flows mainly at the skin of the conductor, between the outer surface and a level called the skin depth.
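The standard skin-depth formula makes the stranded-vs-solid question quantitative; a small sketch using textbook values for copper:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
RHO_CU = 1.68e-8           # copper resistivity at room temperature, ohm*m

def skin_depth_m(freq_hz, resistivity=RHO_CU, mu=MU_0):
    """Depth at which AC current density falls to 1/e of its surface value."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * mu))

print(round(skin_depth_m(60) * 1e3, 2), "mm")   # mains AC: ~8.4 mm
print(round(skin_depth_m(1e6) * 1e6, 1), "um")  # 1 MHz: ~65 um
```

At 50/60 Hz the skin depth is comparable to typical wire radii, which is one reason stranding matters mostly at radio frequencies (litz wire), not for ordinary mains wiring.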
No evidence social media time is correlated with teen mental health problems
The APA lost its credibility some time ago with its political stances. It doesn't take a researcher to see a correlation between phone time and youth issues.
It takes a professional to determine whether harassing them about their electronics is the actual cause of the behaviors in question.
From "New Mexico: Psychologists to dress up as wizards when providing expert testimony" https://news.ycombinator.com/item?id=40085678 :
> Clean Language
LG Chem develops material capable of suppressing thermal runaway in batteries
Does this prevent lithium - water reaction? Is that fire, or?
When current gen EV batteries catch fire in fresh and saltwater flood waters, is that due to thermal runaway?
(Twisted SWCNT carbon nanotubes don't have these risks at all FWIU)
Twisted SWCNT is still mostly a concept; it doesn't even have a proof of concept in the lab. According to people involved in it, it may take a couple of decades to go to market. But I agree, it would be great to have "wind-up" cars, for example.
Business idea: form dense shipping containers of addressed arrays of [twisted SWCNT, rhombohedral trilayer graphene, or double-gated graphene,]? They would need: anodes, industry standard commercial and residential solar connectors, water-safe EV charge connectors, and/or "Mega pack" connectors to jump charge semi-trucks from a container on a flatbed or a sprinter van.
Branded, stackable Intermodal shipping containers with charge controllers
It's probably already possible to thermoform and ground biocomposite shipping containers, with ocean recovery loops?
Bast fiber anodes for super capacitors are inexpensive, and there may be something to learn from their naturally branching shape. https://www.google.com/search?q=bast%20fiber%20supercapacito...
"Scientists grow carbon nanotube forest much longer than any other" (2020) https://www.nanowerk.com/nanotechnology-news2/newsid=56546.p...
"Growing ultra-long carbon nanotubes" https://youtube.com/watch?v=VyULskYuGvg
"Ultra-long carbon nanotube forest via in situ supplements of iron and aluminum vapor sources" (2020) https://dx.doi.org/doi:10.1016/j.carbon.2020.10.066 .. https://scholar.google.com/citations?hl=en&user=1zqRM3UAAAAJ...
https://news.ycombinator.com/item?id=41159447 :
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
> 583 Wh/kg
Yes, obvious once. Only light years away, I'm afraid. Try and get funding for that. GLWT
> In its announcement, LG Chem has reported that in both battery impact and penetration tests, the batteries equipped with the thermal runaway suppression material either did not catch fire at all or extinguished the flames shortly after they appeared, preventing a full-blown thermal runaway event.
> The material comes in the form of a thin layer, just 1 micrometer (1μm) thick – about one hundredth the thickness of human hair – positioned between the cathode layer and the current collector (an aluminum foil that acts as the electron pathway). When the battery’s temperature rises beyond the normal range, between 90°C and 130°C, the material reacts to the heat, altering its molecular structure and effectively suppressing the flow of current, LG Chem said.
> The material is described as highly responsive to temperature, with its electrical resistance increasing by 5,000 ohms (Ω) for every 1°C rise in temperature. The material’s maximum resistance is over 1,000 times higher than at normal temperatures, and it also features reversibility, meaning the resistance decreases and returns to its original state, allowing the current to flow normally again once the temperature drops.
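A toy model of the reported behavior (the ~90°C trigger and the 5,000 Ω/°C slope come from the article; the baseline resistance and the simple linear shape are assumptions for illustration):

```python
def srl_resistance_ohms(temp_c, base_ohms=1.0, trigger_c=90.0, slope=5000.0):
    """Safety-reinforced-layer resistance vs. temperature (toy model)."""
    if temp_c <= trigger_c:
        return base_ohms  # normal operation: current flows freely
    # past the trigger, resistance climbs steeply and chokes off the current
    return base_ohms + slope * (temp_c - trigger_c)

print(srl_resistance_ohms(25))   # 1.0 (baseline)
print(srl_resistance_ohms(100))  # 50001.0 -- current effectively suppressed
```

Because the function is deterministic in temperature, cooling back below the trigger returns the baseline value, matching the reversibility the article describes.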
To the best of my searching, a typical lithium ion battery has roughly 20-30 cathode-and-current-conductor layers. So naively this would add less than 50 microns to the battery thickness.
Also, the open-access article is here: https://www.nature.com/articles/s41467-024-52766-9
ScholarlyArticle: "Thermal runaway prevention through scalable fabrication of safety reinforced layer in practical Li-ion batteries" (2024) https://www.nature.com/articles/s41467-024-52766-9
Dance training superior to physical exercise in inducing brain plasticity (2018)
I do MRI work, and my gut is that none of the claims about dance vs. exercise would replicate. The behavioral data suggests that activity of some type will improve cognitive function (main effects of time). Such beneficial effects of activity on the brain have been shown before, and this is generally accepted. However, the authors' behavioral data doesn't show any difference between the dance vs. exercise groups. This means that the study is overall off to a pretty bad start if their goal is to study dance vs. exercise differences...
The brain data claims to show that the dance vs. exercise groups showed different levels of improvement in various regions. However, the brain effects are tiny and are probably just random noise (I'm referring to those red spots, which are very small and almost certainly don't reflect proper correction for multiple hypotheses given that the authors effectively tested 1000s or 10000s of different areas). The authors' claims about BDNF are supported by a p-value of p = .046, and having main conclusions hinge on p-values of p > .01 usually means the conclusions are rubbish.
In general, my priors on "we can detect subtle changes in brain matter over a 6-week period" are also very low. Perhaps, a study with this sample size could show that activity of some kind influences the brain over such a short length, but I am extremely skeptical that this type of study could detect differences between dance vs. exercise effects.
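The multiple-comparisons worry is easy to simulate (test count and seed are arbitrary): under the null hypothesis p-values are uniform on [0, 1], so thousands of uncorrected tests yield hundreds of "significant" hits from pure noise.

```python
import random

rng = random.Random(1)
n_tests = 10_000  # on the order of the region/voxel counts mentioned above
alpha = 0.05

# Simulate null p-values and count uncorrected hits below alpha.
false_positives = sum(1 for _ in range(n_tests) if rng.random() < alpha)
print(false_positives)  # close to the 500 expected by chance alone
```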
Cardiovascular exercise correlates with subsequent synthesis of endocannabinoids, which affect hippocampal neurogenesis and probably thereby neuroplasticity.
From "Environment shapes emotional cognitive abilities more than genes" (2024) https://news.ycombinator.com/item?id=40105068#40107602 :
> hippocampal plasticity and hippocampal neurogenesis also appear to be affected by dancing and omega-3,6 (which are transformed into endocannabinoids by the body): https://news.ycombinator.com/item?id=15109698
Also, isn't there selection bias to observational dance studies? If not in good general health, is a person likely to pursue regular dancing? Though, dance lifts mood and gets the cardiovascular system going.
Dance involves spatial sense and proprioception and probably all of the senses.
[deleted]
John Wheeler saw the tear in reality
/? "Wheeler's bags of gold" https://www.google.com/search?q=wheelers+bags+of+gold
Holographic principle: https://en.wikipedia.org/wiki/Holographic_principle :
> The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood.
Evidence of 'Negative Time' Found in Quantum Physics Experiment
"Experimental evidence that a photon can spend a negative amount of time in an atom cloud" (2024) https://arxiv.org/abs/2409.03680
Additional indications of retrocausality:
- "Robust continuous time crystal in an electron–nuclear spin system" (2024) https://www.nature.com/articles/s41567-023-02351-6 .. https://news.ycombinator.com/item?id=39291044
"3D waves flowing over Sierpinski carpets" gives intuition about (convolutional) wave products or outcomes but doesn't show any retrocausality from here: https://youtube.com/watch?v=8yddkbwrqss&
An adult fruit fly brain has been mapped
It was my understanding that all this connectome-based research was largely a dead end, because it doesn't capture dynamics, nor a vast array of interactions. If you've ever seen neurones being grown (go search YT), you'll see it's a massive gelatinous structure which is highly plastic and highly dynamic. Even in the simplest brains (e.g., of C. elegans), you get 10^x exponential growth in the number of neurones and their connections as it grows.
From https://news.ycombinator.com/item?id=35877402#35886145 :
> So, to run the same [fMRI, NIRS,] stimulus response activation observation/burn-in again weeks or months later with the same subjects is likely necessary given Representational drift
And isn't there n-ary entanglement?
Is the world really running out of sand?
We fools here in Germany sometimes _pay_ to get rid of excess electricity when it's very sunny and windy. How about having some rock crushing machines that instead use that cheap electricity to make more sand?
Thanks for the puns, too.
Free power doesn’t mean free production.
There’s also labor, wear on the machines, and the lost opportunity of using your money to do something else. Building such crushing machines and only using them x% of the time (for now, fairly small values of x) may not be a good investment.
So, energy storage should be lucrative and incentivized for us to solve the Duck Curve and Alligator Curve, and the problem of overcharging the grid forcing the price to zero or unprofitably below.
For firms with depreciating datacenter assets that are underutilized, a research coin like Gridcoin or an @home distributed computation project can offset costs.
Datacenters scrap old compute - that there's hardly electronics recycling for - rather than keep it online, due to the relative cost in FLOPS/kWh and the cost of conditioned space.
Doesn't electronics recycling recover the silica?
Can we make CPUs out of graphene made out of recycled plastic?
Can we make superconductive carbon computers that waste less electricity as heat?
Hopefully, Proof of Work miners that aren't operating on grants have an incentive to maximize energy efficiency with graphene ASICs and FPGAs and now TPUs
Make Pottery at Home Without a Kiln (Or Anything Else) [video]
Title: "Without a Kiln". Video: builds a kiln in the backyard.
Boo. It is not a solution if you live in a multi-storey house in the center of a big city (and a small electric kiln IS a solution in this case).
Pit firing pottery isn't done with a kiln; it fires pottery to much lower temperatures.
And yes, not every solution is for everybody... For people who don't live in a city, doing this in their backyard is more accessible than buying equipment.
finding a class that uses a shared space kiln would likely be cheaper and more rewarding
Some pottery kiln places will only fire their own clay, which must be to spec.
And safety glasses to handle warm clay that's been heated at all in a kiln or a fire.
Looked at making unglazed terracotta ollas for irrigation and couldn't decide whether a 1/4" silicone microsprinkler tubing port should go through the lid or the side.
Terracotta filters water, so presumably ollas would need to be recycled eventually due to the brawndo in the tap water and rainwater.
/? how to filter water with a terracotta pot
It looks like only the Nat Geo pottery wheel has a spot to attach a wooden guide to turn against; the commercial pottery wheels don't have a place to mount the attachments that are needed for pottery.
Also neat primitive pottery skills: Primitive Skills, Primitive Technology
"Primitive Skills: Piston Bellows (Fuigo)" https://youtube.com/watch?v=CHdmlnAA010&
"Primitive Technology: Water Bellows smelt" https://youtube.com/watch?v=UdjVnGoNvU4&
Megalithic Geopolymers require water glass FWIU
/? how to make concrete planters
But rectangularly-formed concrete doesn't filter water like unglazed terracotta
The first proposal for a solar system domain name system
An Interplanetary DNS Model: https://www.ietf.org/archive/id/draft-johnson-dtn-interplane...
W3C DID Decentralized Identifiers sk/pk can be generated offline and then the pk can optionally be registered online. If there is no connection to the broader internet - as is sometimes the case during solar storms in space - offline key generation without a central authority would ensure ongoing operations.
Blockcerts can be verified offline.
EDNS Ethereum DNS avoids various pitfalls of DNS, DNScrypt, DNSSEC, and DoH and DoT (which depend upon NTP which depends upon DoH which depends upon the X.509 cert validity interval and the system time being set correctly).
Dapps don't need DNS and are stored in the synchronized (*) chain.
From https://news.ycombinator.com/item?id=32753994 :
> W3C DID pubkeys can be stored in a W3C SOLID LDPC: https://solid.github.io/did-method-solid/
> Re: W3C DIDs and PKI systems; CT Certificate Transparency w/ Merkle hashes in google/trillian and edns,
And then there's actually handling money with keys in space, because how do people pay for limited-time leases on domain names in space?
In addition to Peering, Clearing, and Settlement, ILP Interledger Protocol Specifies Addresses: https://news.ycombinator.com/item?id=36503888
> ILP is not tied to a single company, payment network, or currency
ILP Addresses - v2.0.0 > Allocation Schemes: https://github.com/interledger/rfcs/blob/main/0015-ilp-addre...
Essential node in global semiconductor supply chain hit by Hurricane Helene
Possibly a dumb question, but why can't we grow synthetic quartz in the same way we grow silicon wafers? Or can we and it's just not cost effective vs mining?
We can but synthetic quartz faces the same problem as hydrocarbon fuels: we can make synthetic natural gas if we use enough energy, or we could exploit the geological processes that created it over millions of years and extract it.
High-purity quartz from areas like Spruce Pine typically forms in pegmatites, where slow cooling of magma allows large, defect-free crystals to form. Hydrothermal fluids permeate these rocks while they’re cooling, effectively leaching out impurities. If the geochemistry is just right, over millions of years, this process repeats several times creating very high purity quartz deposits that are very difficult to replicate in laboratory conditions.
Can nanoassembly produce quartz more efficiently?
/? nanoassembly of quartz https://www.google.com/search?q=nanoassembly%20of%20quartz mentions hydrothermal synthesis,
Which other natural processes affect the formation of quartz and gold?
From https://news.ycombinator.com/item?id=41437036#41442489 :
> "Gold nugget formation from earthquake-induced piezoelectricity in quartz" (2024) https://www.nature.com/articles/s41561-024-01514-1 [...]
> Are there phononic excitations from earthquake-induced piezoelectric and pyroelectric effects?
> Do (surface) plasmon polaritons SPhP affect the interactions between quartz and gold, given heat and vibration as from an earthquake?
> "Extreme light confinement and control in low-symmetry phonon-polaritonic crystals" like quartz https://arxiv.org/html/2312.06805v2
The best browser bookmarking system is files
I completely disagree.
If the built-in bookmark systems in browsers could support tags, then I would say yes. However, it currently only supports a basic tree concept, with "folders" for links.
This is very one dimensional. I read loads of articles that talk about multiple topics. Especially Hacker News type articles :). An article can talk about, say, geopolitics. As an example, perhaps an article on the recent pagers that exploded in Lebanon. This article may also be discussing some cybersecurity topics too. In this case I may want to tag it with 1->n tags.
I currently use Raindrop.io. It kinda works, but it doesn't really have what I have in mind. It also has more features than I think I need from a bookmarking app.
I kinda feel that Digg (way back, it was one of the first 'Web 2.0' sites) had a model that could work.
If I had enough motivation, I think I could probably produce a simple app that does tagging, and only tagging, with bookmarks.
Not sure about other browsers, but Firefox's built-in bookmarks support tags - no need for external apps.
Firefox can store bookmark tags, but they aren't saved with the bookmark export without reading the SQLite database with a different tool: "Allow reading and writing bookmark tags" (9 years ago) https://bugzilla.mozilla.org/show_bug.cgi?id=1225916
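The tags are readable from a copy of places.sqlite; a minimal sketch, assuming the commonly documented schema where tag folders are children of the 'tags________' root in moz_bookmarks (schema details may vary by Firefox version):

```python
import sqlite3

# Assumed schema: tag folders live under the "tags" root in moz_bookmarks,
# and tagged bookmark items are children of a tag folder, with item.fk
# pointing at a row in moz_places.
TAG_QUERY = """
SELECT tag.title AS tag, p.url AS url
FROM moz_bookmarks tag
JOIN moz_bookmarks item ON item.parent = tag.id
JOIN moz_places p ON p.id = item.fk
WHERE tag.parent = (
    SELECT id FROM moz_bookmarks WHERE guid = 'tags________'
)
ORDER BY tag.title
"""

def read_tags(db_path):
    """Return (tag, url) pairs from a copy of Firefox's places.sqlite."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(TAG_QUERY).fetchall()
```

Run it against a copy of the profile's places.sqlite (not the live file, which may be locked).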
With bookmarks as JSONLD Linked Data, it's simple to JOIN with additional data about a given URI.
The WebExtensions Bookmark API does not yet support tags.
Firefox's bookmarks manager exports tags just fine (whether to JSON or the bookmarks.html format). WebExtension APIs are a completely separate issue.
Show HN: Iceoryx2 – Fast IPC Library for Rust, C++, and C
Hello everyone,
Today we released iceoryx2 v0.4!
iceoryx2 is a service-based inter-process communication (IPC) library designed to make communication between processes as fast as possible - like Unix domain sockets or message queues, but orders of magnitude faster and easier to use. It also comes with advanced features such as circular buffers, history, event notifications, publish-subscribe messaging, and a decentralized architecture with no need for a broker.
For example, if you're working in robotics and need to process frames from a camera across multiple processes, iceoryx2 makes it simple to set that up. Need to retain only the latest three camera images? No problem - circular buffers prevent your memory from overflowing, even if a process is lagging. The history feature ensures you get the last three images immediately after connecting to the camera service, as long as they’re still available.
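The "keep only the latest three camera images" behavior maps to a bounded ring buffer; a plain-Python sketch of the concept (not the iceoryx2 API):

```python
from collections import deque

# Ring buffer keeping only the newest `maxlen` items: old frames are
# dropped automatically, so a lagging consumer never causes unbounded growth.
frames = deque(maxlen=3)

for i in range(10):              # a camera producing frames 0..9
    frames.append(f"frame-{i}")

# A consumer connecting late still sees the three most recent frames.
print(list(frames))              # -> ['frame-7', 'frame-8', 'frame-9']
```

In iceoryx2 the same idea lives in shared memory with zero-copy access, which is where the performance comes from.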
Another great use case is for GUI applications, such as window managers or editors. If you want to support plugins in multiple languages, iceoryx2 allows you to connect processes - perhaps to remotely control your editor or window manager. Best of all, thanks to zero-copy communication, you can transfer gigabytes of data with incredibly low latency.
Speaking of latency, on some systems, we've achieved latency below 100ns when sending data between processes - and we haven't even begun serious performance optimizations yet. So, there’s still room for improvement! If you’re in high-frequency trading or any other use case where ultra-low latency matters, iceoryx2 might be just what you need.
If you’re curious to learn more about the new features and what’s coming next, check out the full iceoryx2 v0.4 release announcement.
Elfenpiff
Links:
* GitHub: https://github.com/eclipse-iceoryx/iceoryx2
* iceoryx2 v0.4 release announcement: https://ekxide.io/blog/iceoryx2-0-4-release/
* crates.io: https://crates.io/crates/iceoryx2
* docs.rs: https://docs.rs/iceoryx2/0.4.0/iceoryx2/
How does this compare to and/or integrate with OTOH Apache Arrow which had "arrow plasma IPC" and is supported by pandas with dtype_backend="pyarrow", lancedb/lancedb, and Serde.rs? https://serde.rs/#data-formats
Another IPC bus: "Varlink – IPC to replace D-Bus gradually in systemd" https://news.ycombinator.com/item?id=41687413
Serde does serialization and deserialization with many formats in Rust.
CBD shows promise as pesticide for mosquitoes
CBD is a differentiated byproduct of CBGA FWIU.
For: Epilepsy, Seizures, Schizophrenia, Mosquito control
Bats, Dragonflies, and Mosquito Fish all eat mosquitoes.
Watercress roots filter water and harbor algae which fish eat too.
Algae produce fat-soluble DHA and EPA Omega-3 PUFAs (polyunsaturated fatty acids), from which the mammalian (and almost the drosophila) ECS Endocannabinoid System appears to synthesize endocannabinoids. Endogenous cannabinoid levels are sensitive to consumption levels of Omega-6s and Omega-3s, because endogenous cannabinoids are synthesized from Omegas.
Minnows, goldfish, guppies, bass, bluegill, and catfish all eat mosquito larvae.
CBD kills mosquitoes?
Early mammals - the water rat - probably ate small fish like minnows and sardines, which are high in Omega 3s.
Was this knowledge known by belligerents in a historical conflict?
FWIW this year I learned that dish soap kills wasps, ants, and other insects.
Does dish soap kill mosquitoes like other insects?
Scientific publishing has been gamed to advance scientists’ careers
Grants, Impact Factor, can't publish or apply for a grant because I don't have university affiliation, Exclusivity agreement, you can only upload the preprint to ArXiV because you paid a journal an application fee and they're going to take all of the returns from distributing your work with threaded comments.
What other industry must pay an application fee to give an exclusive to a government-granted Open Access research study?
You should tell them that they must pay you.
(And that they must host [linked] data, and reproducible containers, and code.)
Text2CAD: Generating sequential cad designs from text prompts
From https://news.ycombinator.com/item?id=40131766 re: LLM minifigs and parametric CAD parts libraries:
> Is there a blenderGPT-like tool trained on build123d Python models?
> ai-game-development tools lists a few CAD LLM apps like blenderGPT and blender-GPT: https://github.com/Yuan-ManX/ai-game-development-tools#3d-mo...
REPL for Dart
Even though I don’t use Dart myself I very much approve of this initiative.
I live in my Python REPL (ipython) and I couldn’t live without my hot reloading without losing state. Allows for hacking about a bit directly in the REPL. Once something starts to take shape you can push it into a file, import it into your REPL then hack about with it in a text editor.
The hot reloading is great when you have a load of state in classes and you can change code in the editor and have that immediately updated on your existing objects in the REPL.
jupyter-dart-kernel: https://github.com/vickumar1981/jupyter-dart-kernel :
pip install jupyter-console jupyterlab -e git+https://github.com/vickumar1981/jupyter-dart-kernel#egg=jupyterdartkernel
jupyter console --kernel jupyterdartkernel
jupyter kernelspec list
IPython/extensions/autoreload.py: https://github.com/ipython/ipython/blob/main/IPython/extensi... docs: https://ipython.readthedocs.io/en/stable/config/extensions/a...
Ocean waves grow way beyond known limits
> The findings could have implications for how offshore structures are designed, weather forecasting and climate modeling, while also affecting our fundamental understanding of several ocean processes.
Does this translate to terrestrial and deep space signals? FWIU there's research in rogue waves applied to EM waves and DSN, too
What is the lowest power [parallel] transmission that results in such rogue wave effects, with consideration for local RF regulations?
> Professor Ton van den Bremer, a researcher from TU Delft, says the phenomenon is unprecedented, "Once a conventional wave breaks, it forms a white cap, and there is no way back. But when a wave with a high directional spreading breaks, it can keep growing."
> Three-dimensional waves occur due to waves propagating in different directions. The extreme form of this is when wave systems are "crossing," which occurs in situations where wave systems meet or where winds suddenly change direction, such as during a hurricane. The more spread out the directions of these waves, the larger the resulting wave can become.
ScholarlyArticle: "Three-dimensional wave breaking" (2024) https://www.nature.com/articles/s41586-024-07886-z
Rogue wave > 21st century, Other uses of the term "rogue wave": https://en.wikipedia.org/wiki/Rogue_wave
> See also links to Branched flow and Resonance
Branched flow: https://en.wikipedia.org/wiki/Branched_flow :
> Branched flow refers to a phenomenon in wave dynamics, that produces a tree-like pattern involving successive mostly forward scattering events by smooth obstacles deflecting traveling rays or waves. Sudden and significant momentum or wavevector changes are absent, but accumulated small changes can lead to large momentum changes. The path of a single ray is less important than the environs around a ray, which rotate, compress, and stretch around in an area preserving way.
Are vortices in Compressible and Incompressible fluids area preserving?
(Which brings us to fluid dynamics and superhydrodynamics or better i.e. superfluid quantum gravity with Bernoulli and navier-stokes, and vortices of curl, and gravity (maybe with gravitons) if the particles are massful, otherwise we call it "radiation pressure" and "solar wind" and it also causes relative displacement)
Resonance: https://en.wikipedia.org/wiki/Resonance :
> For an oscillatory dynamical system driven by a time-varying external force, resonance occurs when the frequency of the external force coincides with the natural frequency of the system.
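That definition can be checked numerically: for a driven, damped harmonic oscillator the steady-state amplitude A(ω) = (F₀/m) / √((ω₀² − ω²)² + (γω)²) peaks near the natural frequency ω₀. A sketch with assumed parameters:

```python
import math

def amplitude(omega, omega0=2.0, gamma=0.1, f0_over_m=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator."""
    return f0_over_m / math.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

# Scan driving frequencies; the response peaks near omega0 = 2.0
# (strictly, slightly below it, because of damping).
omegas = [i / 100 for i in range(1, 400)]
peak = max(omegas, key=amplitude)
print(round(peak, 2))
```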
I doubt this is relevant for EM waves. Water waves are much more complicated because the wave speed in water is highly frequency dependent. That makes things much harder to model. Whilst in optics (and thus EM) people have been reasoning in terms of wavefronts in 3d for about half a century. Look at a YouTube channel like Huygens optics to see what a former professional can easily do in his shed.
Photon waves are EM waves and particles, by the Particle-Wave Duality.
Photons behave like fluids in superfluids; "liquid light".
Also, the paths of photons are affected by the media of transmission: gravity, gravitational waves, and water fluid waves bend light.
Photonic transmission and retransmission occurs at least in part by phononic excitation of solids and fluids; but in the vacuum of space if there is no mass, how do quanta propagate?
Optical rogue waves > Principles: https://en.wikipedia.org/wiki/Optical_rogue_waves :
> Supercontinuum generation with long pulses: Supercontinuum generation is a nonlinear process in which intense input light, usually pulsed, is broadened into a wideband spectrum. The broadening process can involve different pathways depending on the experimental conditions, yielding varying output properties. Especially large broadening factors can be realized by launching narrowband pump radiation (long pulses or continuous-wave radiation) into a nonlinear fiber at or near its zero-dispersion wavelength or in the anomalous dispersion regime. Such dispersive characteristics support modulation instability, which amplifies input noise and forms Stokes and anti-Stokes sidebands around the pump wavelength. This amplification process, manifested in the time domain as a growing modulation on the envelope of the input pulse, then leads to the generation of high-order solitons, which break apart into fundamental solitons and coupled dispersive radiation
FWIU photonic superradiance is also due to critical condition(s).
Re: Huygens and photons; https://news.ycombinator.com/item?id=40492160 :
>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity [given Huygens' applied]
TODO: remember the name of the photonic effect of self-convolution and nonlinearity and how the wave function interferes with itself in the interval [0,1.0] whereas EM waves are [-1,1] or log10 [-inf, inf].
Coherence, Wave diffraction vs. wave interference > Superposition principle: https://en.wikipedia.org/wiki/Superposition_principle#Wave_d...
Refactoring Python with Tree-sitter and Jedi
Would’ve been easy with fastmod: https://github.com/facebookincubator/fastmod
> I do wish tree-sitter had a mechanism to directly manipulate the AST. I was unable to simply rename/delete nodes and then write the AST back to disk. Instead I had to use Jedi or manually edit the source (and then deal with nasty off-set re-parsing logic).
Or libCST: https://github.com/Instagram/LibCST docs: https://libcst.readthedocs.io/en/latest/ :
> LibCST parses Python 3.0 -> 3.12 source code as a CST tree that keeps all formatting details (comments, whitespaces, parentheses, etc). It’s useful for building automated refactoring (codemod) applications and linters.
libcst_transformer.py: https://gist.github.com/sangwoo-joh/26e9007ebc2de256b0b3deed... :
> example code for renaming variables using libcst [w/ Visitors and Transformers]
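For contrast, the stdlib `ast` module can rename names too, but `ast.unparse` (Python >= 3.9) discards comments and whitespace - which is exactly why CST-based tools like LibCST exist. A minimal sketch:

```python
import ast

class Rename(ast.NodeTransformer):
    """Rename every Name node matching `old` to `new`."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

def rename_variable(source, old, new):
    tree = Rename(old, new).visit(ast.parse(source))
    return ast.unparse(tree)  # NOTE: comments and formatting are lost here

print(rename_variable("x = 1  # keep me\ny = x + 1", "x", "total"))
# The comment disappears from the output - LibCST would preserve it.
```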
Refactoring because it doesn't pass formal verification: https://deal.readthedocs.io/basic/verification.html#backgrou... :
> 2021. deal-solver. We released a tool that converts Python code (including deal contracts) into Z3 theorems that can be formally verified
Vim python-mode: https://github.com/python-mode/python-mode/blob/e01c27e8c17b... :
> Pymode can rename everything: classes, functions, modules, packages, methods, variables and keyword arguments.
> Keymap for rename method/function/class/variables under cursor
let g:pymode_rope_rename_bind = '<C-c>rr'
python-rope/ropevim also has mappings for refactorings like renaming a variable: https://github.com/python-rope/ropevim#keybinding : C-c r r :RopeRename
C-c f find occurrences
https://github.com/python-rope/ropevim#finding-occurrences
Their README now recommends pylsp-rope:
> If you are using ropevim, consider using pylsp-rope in Vim
python-rope/pylsp-rope: https://github.com/python-rope/pylsp-rope :
> Finding Occurrences: The find occurrences command (C-c f by default) can be used to find the occurrences of a python name. If unsure option is yes, it will also show unsure occurrences; unsure occurrences are indicated with a ? mark in the end. Note that ropevim uses the quickfix feature of vim for marking occurrence locations. [...]
> Rename: When Rename is triggered, rename the symbol under the cursor. If the symbol under the cursor points to a module/package, it will move that module/package files
SpaceVim > Available Layers > lang#python > LSP key Bindings: https://spacevim.org/layers/lang/python/#lsp-key-bindings :
SPC l e rename symbol
Vscode Python variable renaming:
Vscode tips and tricks > Multi cursor selection: https://code.visualstudio.com/docs/getstarted/tips-and-trick... :
> You can add additional cursors to all occurrences of the current selection with Ctrl+Shift+L. [And then rename the occurrences in the local file]
https://code.visualstudio.com/docs/editor/refactoring#_renam... :
> Rename symbol: Renaming is a common operation related to refactoring source code, and VS Code has a separate Rename Symbol command (F2). Some languages support renaming a symbol across files. Press F2, type the new desired name, and press Enter. All instances of the symbol across all files will be renamed
But does Rope understand pytest fixtures? I doubt it, but would be happy to be proven wrong.
Yeah, there `sed` and `git diff` with one or more filenames in a variable might do it.
Because pytest requires a preprocessing step (assertion rewriting), renaming fixtures is tough; and for Jupyter notebooks, %%ipytest is necessary to call functions that start with test_ and to upgrade assert keywords to expressions; e.g. `assert a == b, error_expr` is preprocessed into something like `assertEqual(a, b, error_expr)`, which yields a useful AssertionError message even for comparisons of large lists and strings.
Ask HN: What is the best online Vectorizer (.Png to .Svg)?
From https://news.ycombinator.com/item?id=38377307#38400435 (2023) :
> TIL about https://github.com/fogleman/primitive from "Comparison of raster-to-vector conversion software" https://en.wikipedia.org/wiki/Comparison_of_raster-to-vector... which does already list vtracer (2020)
The perils of transition to 64-bit time_t
The way this was handled on Mac OS X for `off_t` and `ino_t` might provide some insight: The existing calls and structures using the types retained their behavior, new calls and types with `64` suffixes were added, and you could use a preprocessor macro to choose which calls and structs were actually referenced—but they were hardly ever used directly.
Instead, the OS and its SDK are versioned, and at build time you can also specify the earliest OS version your compiled binary needs to run on. So using this, the headers ensured the proper macros were selected automatically. (This is the same mechanism by which new/deprecated-in-some-version annotations would get set to enable weak linking for a symbol or to generate warnings for it respectively.)
And it was all handled initially via the preprocessor, though now the compilers have a much more sophisticated understanding of what Apple refers to as “API availability.” So it should be feasible to use the same mechanisms on any other platform too.
Lol, that only works if you can force everyone on the platform to go along with it. It is a nice solution, but it requires you to control the c library. Gentoo doesn’t control what libc does; that’s either GNU libc or MUSL or some other thing that the user wants to use.
> requires you to control the c library
Which is why basically every other sane operating system does that. BSDs, macOS, WinNT. Having the stable boundary be the kernel system call interface is fucking insane. And somehow the Linux userspace people keep failing to learn this lesson, no matter how many times they get clobbered in the face by the consequences of not learning it.
Making the stable boundary be C headers is insane!
It means that there's not actually any sort of ABI, only a C source API.
cibuildwheel builds manylinux packages for glibc >= a given version, and musllinux packages for musl, because of this ABI.
manylinux: https://github.com/pypa/manylinux :
> Python wheels that work on any linux (almost)
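At runtime, `platform.libc_ver()` shows which libc the interpreter was linked against; on musl-based distros it typically reports empty strings, which is one reason separate musllinux wheel tags exist alongside manylinux:

```python
import platform

# On a glibc system this returns e.g. ('glibc', '2.31'); on musl-based
# distros (e.g. Alpine) it typically returns ('', ''), since libc_ver()
# only detects glibc.
lib, version = platform.libc_ver()
print(lib or "not glibc", version or "unknown")
```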
Static Go binaries that make direct syscalls and do not depend upon libc or musl run within very minimal containers.
Fuchsia's syscall docs are nice too; Linux plus additional syscalls.
FFT-based ocean-wave rendering, implemented in Godot
20 years ago I could spend months tweaking ocean surface in renders and not get even close to that. Amazing how good this is!!
Although the demo clip feels a bit exaggerated (saying this having over 50k NM of open-water ocean sailing in my logbook). Waves that sharp and high would need the wind blowing a lot stronger. But I am sure that is just a parameter adjustment away!
Since it is in Godot I assume the rendering is real time? Does it need a monster GPU?
This is not a criticism, just an observation: it looks like what I imagine an ocean of hot corn syrup would look like (after dyeing it blue). The viscosity seems right; possibly the surface tension is not what ocean water would have (a colloid of salty H2O and biomaterial, which is common in real-world experience but quite ugly for computational fluid dynamics).
Also note that the ocean spray here is a post-hoc effect, but for a real ocean the spray dulls the sharpness of the waves in a way that will be (vaguely) apparent visually.
Of course there's almost no "physics" in this elegant, simple, and highly effective model, so I want to emphasize that suggesting directions to poke around and try things should not be construed as an armchair criticism.
This is literally a criticism.
A) It would be a criticism if I thought these effects could be plausibly rendered with a similar FFT algorithm, but that seems unlikely to me. I think these results are "highly effective" given the toolset, which is not attempting to emulate the actual physics.
B) This project is not an all-out attempt to make lifelike water, it is described as an experiment. I am making an observation about the result of the experiment, not criticizing the project for failing to meet standards it wasn't holding itself to.
Neat that FFT yields great waves.
But ultimately, does that model capture vortices or other fluid dynamics?
Can this model a fluid vortex between 2-liter bottles with a 3d-printable plastic connector?
Curl, nonlinearity, Bernoulli, Navier-Stokes, and Gross-Pitaevskii are known tools for CFD computational fluid dynamics with Compressible and Incompressible fluids.
"Ocean waves grow way beyond known limits" (2024-09) https://news.ycombinator.com/item?id=41631177#41631975
"Gigantic Wave in Pacific Ocean Was the Most Extreme 'Rogue Wave' on Record" (2024-09) https://news.ycombinator.com/item?id=41548417#41550654
Keep in mind that this project is aimed at video game developers, not oceanographers :) The point is to get something cheap and plausible, not to solve Navier-Stokes with finite element methods.
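A toy 1D version of the technique, stdlib-only: random-phase spectral amplitudes with a crude 1/k² falloff (a stand-in for the Phillips spectrum), inverse-DFT'd into a real height field. Real implementations (à la Tessendorf) do this in 2D with an FFT:

```python
import cmath, math, random

def ocean_heights(n=64, seed=0):
    """Toy 1D wave synthesis: build Hermitian-symmetric spectral
    coefficients with random phases, then inverse-DFT to a real
    height field. A 1/k^2 falloff stands in for a real wave spectrum."""
    rng = random.Random(seed)
    spectrum = [0j] * n
    for k in range(1, n // 2):
        amp = 1.0 / (k * k)                        # crude spectral falloff
        phase = rng.uniform(0, 2 * math.pi)
        spectrum[k] = amp * cmath.exp(1j * phase)
        spectrum[n - k] = spectrum[k].conjugate()  # Hermitian -> real output
    # Naive O(n^2) inverse DFT; a real renderer would use an FFT.
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * x / n)
                for k in range(n)).real / n
            for x in range(n)]

heights = ocean_heights()
print(len(heights), round(max(heights), 3))
```

Animating the phases per-frame is what gives the rolling-ocean look; it stays periodic and cheap, with no Navier-Stokes involved.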
Show HN: Test your website on 180+ device viewports (with multi-device mode)
Hey HN! We do a lot of frontend testing, and one thing we find frustrating is the limited tooling available to observe how a URL renders on different viewports/devices simultaneously.
For example, we can use Chrome Inspector's "Device" previews, but we run into the following:
- Chrome doesn't keep those devices/viewports up to date, and adding new ones is pretty annoying (we've open sourced our collection of 180+ viewports/dimensions under MIT).
- You can’t see multiple viewports at once (and it’s cumbersome switching back and forth)
- When you notice a problem, there's no simple way to deep-link someone on your team to take a look at that viewport (likely have to open a Jira/Linear with the specifics)
- You can’t favourite viewports (for tracking the viewports specific to your project/client)
We’ve built a product with the goal of addressing these limitations (among others), and would love to hear your feedback.
Finally, feel free to use the open sourced data for your own projects :)
This reminded me of a very similar one but for the desktop from a few years back. Here you go;
Source at https://github.com/responsively-org/responsively-app
shot-scraper is a CLI tool for HTML render snapshots with --height --width and --scale-factor (--retina ~= --scale-factor=2.0) args, and a -j/--javascript option to run JS before taking the screenshot.
There's not yet a way to resize the viewport and take another screenshot without loading the page again.
There's not yet a way to store (h,w,scale,showbrowserchrome) configurations in a YAML file; but shot-scraper does work in GitHub Actions.
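In the meantime, generating per-viewport shot-scraper invocations from a config list is a few lines; the --width/--height/--retina flag names are assumed from shot-scraper's README, and the viewport list here is hypothetical:

```python
import shlex

# Hypothetical viewport list: (name, width, height).
VIEWPORTS = [
    ("iphone-se", 375, 667),
    ("ipad", 768, 1024),
    ("desktop", 1920, 1080),
]

def shot_commands(url, viewports=VIEWPORTS, retina=False):
    """Build one shot-scraper command line per viewport."""
    cmds = []
    for name, w, h in viewports:
        cmd = ["shot-scraper", url, "--width", str(w), "--height", str(h),
               "-o", f"{name}.png"]
        if retina:
            cmd.append("--retina")
        cmds.append(shlex.join(cmd))
    return cmds

for c in shot_commands("https://example.com/"):
    print(c)
```

Each line reloads the page, so this doesn't solve the resize-without-reload limitation above; it just batches the screenshots.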
Some web testing tools can record a video of a test session and save it if there's a test failure.
E.g. Playwright defaults to an 800x600 viewport for test videos to retain-on-failure or record only if on-first-retry: https://playwright.dev/docs/videos
Something like Cloudflare Browser Isolation - which splits the browser at an interface such that a Chrome browser server runs on a remote server and a Chrome client runs locally - might make it easier to compress recordings of n x test invocations (in isolated browsers or workers)
This is awesome: thanks for these resources! We'll take a look at Cloudflare Browser Isolation for sure. We've played with screenshot tools at specific viewport dimensions etc, but it's not often the right fit for when we're doing interactive frontend responsive updates. That aside though, the screenshot APIs could be a great integration to be able to download/save screenshots of specific viewports.
DevTools has a list of default viewport sizes, but they change from browser release to release. Is there a better way or an existing [YAML] file schema to generate a list of viewport configurations to import into DevTools?
We looked into this, and we couldn't find a way to do so, but we've open sourced the data (here: https://github.com/bitcomplete/labs-delta-viewport-tester-vi...) so if you are able to find a way to do so, that could be a great option.
Hey thanks! https://github.com/bitcomplete/labs-delta-viewport-tester-vi...
Use case: add which few of these to DevTools. IDK if there's a YAML parser in DevTools already, otherwise JSON + jq should work somehow
awesome-chrome-devtools: https://github.com/paulirish/awesome-chrome-devtools
Thanks for sharing this @westurner
Wasm2Mpy: Compiling WASM to MicroPython so it can run on Raspberrys
I feel like this project is a classic example of the README missing a "Why?" section. Not to justify the project; it just makes it hard to evaluate without understanding why they chose to implement a transpiler rather than an embedded WASM VM. Or why not transpile to native assembly? I'm sure they have good reasons but they aren't listed here.
i think your questions are implicitly answered in the top page of the readme, but by showing rather than telling
esp32 and stm32 run three different native assembly instruction sets (though this only supports two of them, the others listed in the first line of the readme are all arm)
the coremark results seem like an adequate answer to why you'd use a compiler rather than an interpreter
i do think it would be clearer, though, to explain up front that the module being built is not made of python code, but rather callable from python code
> i do think it would be clearer, though, to explain up front that the module being built is not made of python code, but rather callable from python code
I know nothing about micropython. Do the modules contain bytecode?
In this case, the modules contain native code compiled for the target architecture. Micropython has something approximating a dynamic linker to load them.
tools/mpy_ld.py: https://github.com/micropython/micropython/blob/master/tools...
tools/mpy-tool.py lists opcodes: https://github.com/micropython/micropython/blob/master/tools...
Can the same be done with .pyc files; what are the advantages of MicroPython native modules?
Why does it need wasm2c?
I don't think .pyc files can contain native code, but .mpy can! And that's exactly what I'm using here:
- compile any language to wasm
- translate wasm to C99
- use the native target toolchain to compile C99 to .mpy
- run that on the device
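The ".pyc files don't contain native code" point is easy to verify: a .pyc is a small header (magic number, flags, source metadata) followed by a marshalled CPython code object - VM bytecode, not machine code:

```python
import dis
import importlib.util
import marshal
import os
import py_compile
import tempfile

# Compile a trivial module and inspect the resulting .pyc.
src = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
src.write("def add(a, b):\n    return a + b\n")
src.close()
pyc = py_compile.compile(src.name, cfile=src.name + "c")

with open(pyc, "rb") as f:
    header = f.read(16)        # magic + flags + source metadata (3.7+)
    code = marshal.load(f)     # the module's code object

assert header[:4] == importlib.util.MAGIC_NUMBER  # CPython version magic
dis.dis(code)                  # prints CPython VM bytecode, not asm
os.unlink(src.name)
os.unlink(pyc)
```

.mpy files, by contrast, can carry architecture-specific native code that mpy_ld links at load time.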
Linux 6.12 NFS Adds Localio Protocol for "Extreme" Performance Boost
> The LOCALIO protocol support allows the NFS client and server to reliably determine if they are on the same host. If they happen to be on the same host, the network RPC protocol for read/write/commit operations are bypassed. Due to bypassing the XDR and RPC for reads/writes/commits, there can be big performance gains to using the LOCALIO protocol.
> One of the intended scenarios where it can be common for having the NFS client and server on the same host is when running containerized workloads
What are the advantages to NFS [with localio] over OverlayFS/overlay2 and bind mounts across context boundaries for container filesystems?
Do the new faster direct read() and write() IO calls have different risks compared to disk IO versus network IO?
Are the file permissions the same with localio? How does that compare to subuids and subgids for rootless containers?
I can imagine say a media server running NFS that can be accessed by both local containers and remote hosts.
What I think would be interesting is if you could access NFS on a local machine from a container or VM without using a network. You could have a standalone VM with no networking that could access shared files. (I think some people use 9p for this?)
Show HN: Httpdbg – A tool to trace the HTTP requests sent by your Python code
Hi,
I created httpdbg, a tool for Python developers to easily debug HTTP(S) client requests in Python programs.
I developed it because I needed a tool that could help me trace the HTTP requests sent by my tests back to the corresponding methods in our API client.
The goal of this tool is to simplify the debugging process, so I designed it to be as simple as possible. It requires no external dependencies, no setup, no superuser privileges, and no code modifications.
I'm sharing it with you today because I use it regularly, and it seems like others have found it useful too—so it might be helpful for you as well.
Hope you will like it.
cle
Source: https://github.com/cle-b/httpdbg
Documentation: https://httpdbg.readthedocs.io/
A blog post on a use case: https://medium.com/@cle-b/trace-all-your-http-requests-in-py...
That's pretty cool! I was playing last night and implemented resumable downloads[0] for pip so that it could pick up where it stopped upon a network disconnect or a user interruption. It sucks when large packages, especially ML related, fail at the last second and pip has to download from scratch. This tool would have been nice to have. Thanks a bunch,
It is important to check checksums (and signatures, if there are any) of downloaded packages prior to installing them; especially when resuming interrupted downloads.
Pip has a hash-checking mode, but it only works if the hashes are listed in the requirements.txt file, and they're the hashes for the target platform. Pipfile.lock supports storing hashes for multiple platforms, but requirements.txt does not.
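A minimal sketch of the pinned-hash check that pip's --require-hashes mode performs; the helper name is hypothetical:

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Return True iff the file's SHA-256 digest matches the pinned hash,
    as in a requirements.txt entry like:
        package==1.0 --hash=sha256:<hex>"""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large wheels don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

The same check applies to a resumed download: hash the fully reassembled file, not just the resumed tail.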
If the package hashes are retrieved over the same channel as the package, they can be MITM'd too.
You can store PyPI package hashes in sigstore.
There should be a way for package uploaders to sign their package before uploading. (This is what .asc signatures on PyPI were for. But if they are retrieved over the same channel, cryptographic signatures can also be MITM'd.)
IMHO (1) twine should prompt to sign the package (with a DID) before uploading it to PyPI, and (2) after uploading, twine should download the package(s) it has uploaded to verify the signature.
Note: TCP RESET plus Content-Range resumption doesn't hash resources.
Thanks for the pointers. The diff is tiny and deals only with resuming downloads. i.e: everything else is left as is.
The Social Web Did Not Begin in 2008
Usenet was federated. And it is old...
Then there is https://irc-galleria.net which to me is clearly social media started in 2000... And I think there might be some older sites.
BBS Bulletin Board System > History: https://en.wikipedia.org/wiki/Bulletin_board_system
GNU talk is a text chat system for users all on the same multiuser *nix system. But it doesn't separate code and data, so terminal control characters can be injected:
Talk (software) > History: https://en.wikipedia.org/wiki/Talk_(software)
Finger (protocol) was named finger. To update your finger "status message" you edit that file in your homedir: https://en.wikipedia.org/wiki/Finger_(protocol)
Finger was written in C and had buffer overflow vulnerabilities; I don't know whether it ever scaled beyond a single multiuser system.
NNTP: https://en.wikipedia.org/wiki/Network_News_Transfer_Protocol
"Social Media" is distinct from "Media" in being a two-way* dialogue. As on only some newspaper sites, Social Media has comments (and UCC: User-Contributed Content).
If a newspaper site hosts comments on each article, will it need to be labeled as a potentially harmful social media outlet per proposed US media regulations? Does that make it social media?
User-Contributed Content is seemingly inexpensive, but is it high-liability? Section 230 makes it possible to moderate without assuming liability for how people use the service to maim, defame, and harass others; neither are arms dealers liable for crimes committed with the products that they sell. Media and Social Media operate without catastrophic liability for holding the mirror up to abusive user-contributed content, instead of corporate doublespeak selling us into counterproductive wars that benefit only special interests (without a comment section) in a 24-hour news ticker that nobody checks.
But corporate content has displaced grassroots media by real people in so many of the now traditional social media outlets, and now everyone has to go to a different bar.
People grassroots-campaigning for peace and equal rights have been replaced by the same old fear, greed, war, pestilence, and violence that TV and radio ad planners and newspaper editors use to sell ads.
Where "mass media" was an opiate for the masses, social media is a panacea for vanity and dysfunction.
Where "War of the Worlds" on the radio might've caused pandemonium back then, today people would likely debunk such claims of alien invasion via online trusted media outlets and bs rags.
FOAF: Friend of a Friend and/or schema.org/Person records describe a social graph of persons with attributes and CreativeWorks, but open record schema don't solve for the costs of indexing, search, video hosting, or federated moderation signals.
Scientists Propose Groundbreaking Method to Detect Single Gravitons
This is cool... One of the team’s proposed innovations is to use available data from LIGO as a data/background filter...
“The LIGO observatories are very good at detecting gravitational waves, but they cannot catch single gravitons,” notes Beitel, a Stevens doctoral student. “But we can use their data to cross-correlate with our proposed detector to isolate single gravitons.”
Wasn't there a new way to measure gravitational waves? Ctrl-F hnlog for LIGO and LISA:
"Mass-Independent Scheme to Test the Quantumness of a Massive Object" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. https://news.ycombinator.com/item?id=39048910 :
> This yields a striking result: a mass-independent violation of MR is possible for harmonic oscillator systems. In fact, our adaptation enables probing quantum violations for literally any mass, momentum, and frequency. Moreover, coarse-grained position measurements at an accuracy much worse than the standard quantum limit, as well as knowing the relevant parameters only to this precision, without requiring them to be tuned, suffice for our proposal. These should drastically simplify the experimental effort in testing the nonclassicality of massive objects ranging from atomic ions to macroscopic mirrors in LIGO.
From https://news.ycombinator.com/item?id=30847777 .. From "Massive Black Holes Shown to Act Like Quantum Particles" https://www.quantamagazine.org/massive-black-holes-shown-to-... :
> Physicists are using quantum math to understand what happens when black holes collide. In a surprise, they’ve shown that a single particle can describe a collision’s entire gravitational wave.
> "Scale invariance in quantum field theory" https://en.wikipedia.org/wiki/Scale_invariance#Scale_invaria...
"New ways to catch gravitational waves" https://news.ycombinator.com/item?id=40825994 :
- "Kerr-Enhanced Optical Spring" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... .. "Kerr-enhanced optical spring for next-generation gravitational wave detectors" (2024) https://news.ycombinator.com/item?id=39957123
- "Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" with a superconducting magnetic trap made out of Tantalum (2024) https://news.ycombinator.com/item?id=39495482
- "Measuring gravity with milligram levitated masses" (2024) https://www.science.org/doi/10.1126/sciadv.adk2949
"Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" https://news.ycombinator.com/item?id=39495482#39495570 .. https://news.ycombinator.com/item?id=30847777 :
> Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform/or and other methods?
"Distorted crystals use 'pseudogravity' to bend light like black holes do" https://news.ycombinator.com/item?id=38008449 :
"Deflection of electromagnetic waves by pseudogravity in distorted photonic crystals" (2023) https://journals.aps.org/pra/abstract/10.1103/PhysRevA.108.0... :
> We demonstrate electromagnetic waves following a gravitational field using a photonic crystal. We introduce spatially distorted photonic crystals (DPCs) capable of deflecting light waves owing to their pseudogravity caused by lattice distortion
Hybrid device does power generation and molecular solar thermal energy storage
"Hybrid solar energy device for simultaneous electric power generation and molecular solar thermal energy storage" (2024) https://www.cell.com/joule/fulltext/S2542-4351(24)00288-5 :
> Context & scale: [...] This paper proposes a hybrid device combining a molecular solar thermal (MOST) energy storage system with PV cell. The MOST system, made of elements like carbon, hydrogen, oxygen, fluorine, and nitrogen, avoids the need for rare materials. It serves as an optical filter and cooling agent for the PV cell, improving solar energy utilization and addressing the limitations of conventional PV and storage technologies.
- "Solar energy can now be stored for up to 18 years, say scientists" (2022) [ with MOST: Molecular Solar Thermal energy storage method ] https://news.ycombinator.com/item?id=34027606 :
- "Chip-scale solar thermal electrical power generation" (2022) https://www.cell.com/cell-reports-physical-science/fulltext/...
QED: A Powerful Query Equivalence Decider for SQL [pdf]
Compiling to Assembly from Scratch
Speaking of embedded systems, I have an old board, an offshoot of Zilog Z80 called Rabbit. I think recently Dave from EEVBlog took apart one of his ancient projects and I was floored to see a Rabbit. Talk about a left hook. Assuming this is Hacker News, I suspect someone probably knows what I’m talking about. The language used (called “Dynamic C”) has some unconventional additions, a kind of coroutine mechanism in addition to chaining functions to be called one after another in groups. It’s mostly C otherwise, so I suspect some macro shennanigans with interrupt vector priority for managing this coroutine stuff.
Anyhow, so I’ve got a bunch of .bin files around for it, no C source code, just straight assembly output ready to be flashed. And the text and data segments are often interwoven, each fetch being resolved to data or instruction in real time by the rabbit processor. So I’ve been thinking of sitting down, going through the assembly/processor manual for the board and just writing a board simulator hoping to get it back to source code by blackbox reversing in a sense. I’d have to rummage through JEDEC for the standard used by the EEPROM to figure out what pins it’s using there and the edge triggering sequences. Once I can execute it and see what registers and GPIOs are written when, I can probably figure out the original code path. Not sure if anyone has tips or guides or suggestions here.
Any chance a decompiler like ghidra might get you part ways there?
The closest I came across was a Z80 simulator, I forget the name of it. But it allowed you to step through command by command giving you ability to query the processor state in a terminal.
So I don’t know if it would be easier trying to find this, and updating it to support rabbit, vs trying to wedge something into ghidra which itself is an undertaking of a behemoth platform. As far as I know ghidra does not have Z80 support.
An Emu86.assembler.virtual_machine.Z8Machine (like MIPSMachine, RISCVMachine, and IntelMachine) could be stepped through in a notebook: https://news.ycombinator.com/item?id=41576922
Execsnoop: Monitors and logs all exec calls on system in real-time
You can accomplish the same with bpftrace:
bpftrace -e 'tracepoint:sched:sched_process_exec { time("%H:%M:%S"); printf(" uid = %d pid = %d cmd = %s \n", uid, pid, comm); } tracepoint:syscalls:sys_enter_execve { time("%H:%M:%S"); printf(" uid = %d pid = %d cmd_with_args = ", uid, pid); join(args->argv); }'
From https://news.ycombinator.com/item?id=37442312 re why not ptrace for tracing all exec() syscalls:
> The Falco docs list 3 syscall event drivers: Kernel module, Classic eBPF probe, and Modern eBPF probe: https://falco.org/docs/event-sources/kernel/
Consider adding ppid to the mix; sometimes _what_ started a process is also quite valuable (if firefox starts bash, worry more than if, say, sshd does)
It looks like the falco rules mention proc.ppid.duration, but there's not yet a rule that matches on ppid: rules/falco_rules.yaml https://github.com/falcosecurity/rules/blob/main/rules/falco... :
> Tuning suggestions include looking at the duration of the parent process (proc.ppid.duration) to define your long-running app processes. Checking for newer fields such as proc.vpgid.name and proc.vpgid.exe instead of the direct parent process being a non-shell application could make the rule more robust.
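On Linux, the parent PID can be read from /proc to support that kind of parent-process rule; a minimal sketch (Linux-only, helper name hypothetical):

```python
def ppid_of(pid):
    """Read the parent PID of a process from /proc/<pid>/status (Linux).

    Returns None if the PPid: field is missing."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("PPid:"):
                return int(line.split()[1])
    return None
```

An exec logger could call this at event time and flag, e.g., shells whose parent is a browser.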
Glass Antenna Turns windows into 5G Base Stations
Would this work with peptide glass?
"A self-healing multispectral transparent adhesive peptide glass" https://www.nature.com/articles/s41586-024-07408-x :
> Moreover, the supramolecular glass is an extremely strong adhesive yet it is transparent in a wide spectral range from visible to mid-infrared. This exceptional set of characteristics is observed in a simple bioorganic peptide glass composed of natural amino acids, presenting a multi-functional material that could be highly advantageous for various applications in science and engineering.
Is there a phononic reason for why antenna + window?
Bass kickers, vibration speakers like SoundBug, and bone conductance microphones like Jawbone headsets are all transducers, too
Transducer: https://en.wikipedia.org/wiki/Transducer
https://news.ycombinator.com/item?id=41442489 :
> FWIU rotating the lingams causes vibrations which scare birds away.
Patch Proposed for Adding x86_64 Feature Levels to the Kernel
> The proposed patch adds the x86_64 micro-architecture feature levels v2 / v3 / v4 as new Kconfig options if wanting to cater to them when building the kernel with modern GCC and LLVM Clang compilers. It's just some Kconfig logic to then set the "-march=x86-64-v4" or similar compiler options for the kernel build.
How much would these compiler flags save a standard datacenter, and then an HPC center?
A Friendly Introduction to Assembly for High-Level Programmers
Biased opinion(because this is how I did it in uni): Before looking at modern x86, you should learn 32-bit MIPS.
It's hopelessly outdated as an architecture, but the instruction set is way more suited for teaching (and you should look at the classic procedure from Harris and Harris where they actually design a pipelined MIPS processor component by component to handle the instruction set).
Emu86/assembler/MIPS/key_words.py: https://github.com/gcallah/Emu86/blob/master/assembler/MIPS/...
Emu86.assembler.virtual_machine > MIPSMachine(VirtualMachine) , RISCVMachine(VirtualMachine) , IntelMachine(VirtualMachine) https://github.com/gcallah/Emu86/blob/b48725898f37dede3ab254...
Learn x in y minutes > "Where X=MIPS Assembly" https://learnxinyminutes.com/docs/mips/
- "Ask HN: Best blog tutorial explaining Assembly code?" (2023) re ASM, HLA, WASM, WASI and POSIX, and docker/containerd/nerdctl support for various WASM runtimes: https://news.ycombinator.com/item?id=38772493
- "Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" (2023) https://news.ycombinator.com/item?id=37086102 ; the emu86 jupyter kernel supports X86, RISC, MIPS, and WASM; and the iarm jupyter kernel supports ARMv6
Rga: Ripgrep, but also search in PDFs, E-Books, Office documents, zip, etc.
To what extent does reading these formats accurately require the execution of code within the documents? In other words, not just stuff like zip expansion by a library dependency of rga, but for example macros inside office documents or JavaScript inside PDFs.
Note: I have no reason to believe such code execution is actually happening — so please don't take this as FUD. My assumption is that a secure design would involve running only external code and thus would sacrifice a small amount of accuracy, possibly negligible.
Also note that it's not necessarily safe to read these documents even if you don't intend on executing embedded code. For example, reading from pdfs uses poppler, which has had a few CVEs that could result in arbitrary code execution, mostly around image decoding. https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=poppler
(No shade to poppler intended, just the first tool on the list I looked at.)
Couldn't or shouldn't each parser be run in a container with systemd-nspawn or LXC or another container runtime? (Even if all it's doing is reading a file format into process space as NX data not code as the current user)
NX bit: https://en.wikipedia.org/wiki/NX_bit
Executable-space protection > Limitations mentions JITs and ROP: https://en.wikipedia.org/wiki/Executable-space_protection
mprotect(), VirtualAlloc[Ex], and VirtualProtect[Ex] can change memory page protections at runtime.
"NX bit: does it protect the stack?" https://security.stackexchange.com/questions/47807/nx-bit-do...
A word about systemd (2016)
A central point of their thesis is that Systemd folks bullied others into accepting it.
That's not really the case, though. The Debian folks had a very transparent discussion and vote about using it:
Related, it's also blatantly ignoring that it's the first software that even allows interoperability in its space, and was thus in huge demand by both developers and administrators.
(Edit: Other anti-systemd rant sites sometimes mention these obliquely as "upstream dependencies" and "workplace-mandated", while ignoring the fact that those people have reasons to demand those things)
But HN really doesn't need to litigate this all again.
what do you mean by interoperability?
in the bad old times, every linux distro implemented its init scripts in a slightly different way, often with distro specific tooling and commands for accomplishing most management tasks
systemd unit files require minimal modification by distro repackagers.
With SysV-init it was/is necessary to modify the /etc/init.d/servicenamed shell scripts to get consistent logging and respawning behavior. With SysV init it's not possible to declare dependency edges, to say that networkd must be started before apached; there are 6 numeric runlevels and filenames like S86apache, which you then can't diff or debsums against because the package default is S80apache, and Apache starts with A, which alphabetically sorts before the N in networkd.
And then add cgroups and namespaces to everything in /etc/init.d please
There were and are other options besides systemd and SysV, in fact years prior: OpenRC and upstart predate systemd, and s6 was not much later.
It's possible to pipe stdout to syslog through logger by modifying the exec args. But I'd rather take journald's word for it.
Supervisord (and supervisorctl) are like s6. I don't think any of these support a config file syntax for namespaces, cgroups, seccomp, or SyscallFilter. Though it's possible to use process isolation with any init system by modifying the init script or config file to call the process with systemd-nspawn (which is like chroot, which most SysV-init scripts also fail to do at all, though a process can drop privs and chroot at init regardless of init system).
systemd works in containers now without weird setup. Though is it better to run a 'sidecar' container (for e.g. certbot) than to run multiple processes in a container?
The HTTP Query Method
There is currently no way to determine whether there's, for example, an application/json representation of a URL that returns text/html by default.
OPTIONS and PROPFIND don't get it.
There should be an HTTP method (or a .well-known url path prefix) to query and list every Content-Type available for a given URL.
From https://x.com/westurner/status/1111000098174050304 :
> So, given the current web standards, it's still not possible to determine what's lurking behind a URL given the correct headers? Seems like that could've been the first task for structured data on the internet
Servers can return something like:
Link: </foo>; rel="alternate" type="application/json"
You could return multiple of these as a response to a HEAD request, for example. You could also use the Accept header in response to an HTTP OPTIONS request, or as part of a 415 error response.
Accept: application/json, text/html
https://httpwg.org/specs/rfc9110.html#field.accept (yes, Accept can be used in responses). .well-known is not a good place for this kind of thing. That discovery mechanism should only be used in situations where only a domain name is known, and a feature or endpoint needs to be discovered for a whole domain. In most cases you want a link.
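A hypothetical client-side helper (not a complete RFC 8288 parser) for reading such Link headers out of a HEAD response:

```python
import re

def parse_link_header(value):
    """Parse an HTTP Link header value into (target, params) pairs,
    e.g. to discover alternates advertised as:
        Link: </foo>; rel="alternate"; type="application/json"
    Simplified: assumes no commas inside quoted strings."""
    links = []
    for part in value.split(","):
        m = re.match(r'\s*<([^>]*)>\s*(.*)', part)
        if not m:
            continue
        target, rest = m.group(1), m.group(2)
        # Collect ;key="value" (or unquoted) parameters after the target
        params = dict(re.findall(r';\s*([\w-]+)="?([^";]*)"?', rest))
        links.append((target, params))
    return links
```

A client could HEAD the URL, parse Link, and pick the entry whose type matches the representation it wants.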
The building blocks for most of this stuff is certainly there. There's a lot of wheels being reinvented all the time.
> https://httpwg.org/specs/rfc9110.html#field.accept (yes Accept can be used in responses)
I think that would solve it.
HTTP servers SHOULD send one or more Accept: {content_type} HTTP headers in the HTTP Response to an HTTP OPTIONS Request in order to indicate which content types can be requested from that path.
Yeah, there's no way to ask for an OpenAPI schema definition.
https://github.com/schemaorg/schemaorg/issues/1423#issuecomm... :
> Examples of how to represent GraphQL, SPARQL, LDP, and SOLID APIs [as properties of one or more rdfs:subClassOf schema:WebAPI (as RDFa in HTML or JSON-LD)]
GraalPy – A high-performance embeddable Python 3 runtime for Java
Took a little digging to find that it targets 3.11. Didn’t see anything about a GIL. If you’re a Python person, don’t click the quick start link unless you want to look at some xml.
Python implementations on the JVM or CLR naturally don't have a GIL; there is no such thing on those platforms.
YAML and JSON have both tried to replicate the XML tooling experience, only worse.
Schemas, comments, parsing and schema conversions tools.
I think GraalPython does have a GIL, see https://github.com/oracle/graalpython/blob/master/docs/contr... - and if by "there is no such thing on those platforms" you mean JVM/CLR not having a GIL, C also does not have a GIL but CPython does.
"PEP 703 – Making the Global Interpreter Lock Optional in CPython" (2023) https://peps.python.org/pep-0703/
CPython built with --disable-gil does not have a GIL (as long as PYTHONGIL=0 and all loaded C extensions are built for --disable-gil mode) https://peps.python.org/pep-0703/#py-mod-gil-slot
"Intent to approve PEP 703: making the GIL optional" (2023) https://news.ycombinator.com/item?id=36913328#36917709 https://news.ycombinator.com/item?id=36913328#36921625
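A quick way to check which mode a given interpreter is in; note sys._is_gil_enabled() exists only on CPython 3.13+, so on older versions the runtime state is reported as unknown:

```python
import sys
import sysconfig

def gil_status():
    """Report whether this interpreter is a free-threaded build
    (PEP 703, configure --disable-gil) and, on CPython 3.13+,
    whether the GIL is actually enabled at runtime."""
    free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    gil_enabled = None  # unknown before 3.13
    if hasattr(sys, "_is_gil_enabled"):
        gil_enabled = sys._is_gil_enabled()
    return {"free_threaded_build": free_threaded, "gil_enabled": gil_enabled}
```

On a free-threaded build the GIL can still be re-enabled at runtime (PYTHONGIL=1, or loading an extension that isn't built for free-threading), which is why the two fields are reported separately.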
This is pretty beside the point. The point is that X not having a GIL doesn't inherently mean Python on X also doesn't have a GIL.
CPython built with --disable-gil does not have a GIL (Global Interpreter Lock). GraalPy does have a GIL, like CPython without --disable-gil.
How CPython accomplished nogil in their fork (the original and reference implementation) is described in the linked PEP 703.
Yes, I know. What I'm saying is that:
It's possible to have a language that doesn't have a GIL, which you implement Python in, but that Python implementation then has a GIL.
The point being that you can't say things like: Jython is written in Java so it doesn't have a GIL. CPython is written in C so doesn't have a GIL. And so on.
If this isn't clear, I apologize.
Oh okay. Yeah I would say that the Java GC and the ported CPython GIL are probably limits to the performance of any Python in Java implementation.
But are there even nogil builds of CPython C extensions on PyPI yet, anyway?
Re: Ghidraal and various methods of Python in Java: https://news.ycombinator.com/item?id=36454485
Chain of Thought empowers transformers to solve inherently serial problems
In the words of an author:
"What is the performance limit when scaling LLM inference? Sky's the limit.
We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed. Remarkably, constant depth is sufficient.
http://arxiv.org/abs/2402.12875 (ICLR 2024)"
> We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many intermediate reasoning tokens as needed.
That seems like a bit of a leap here to make this seem more impressive than it is (IMO). You can say the same thing about humans, provided they are allowed to think across as many years/generations as needed.
Wake me up when a LLM figures out stable fusion or room temperature superconductors.
> You can say the same thing about humans, provided they are allowed to think across as many years/generations as needed.
Isn’t this a good thing since compute can be scaled so that the LLM can do generations of human thinking in a much shorter amount of time?
Say humans can solve quantum gravity in 100 years of thinking by 10,000 really smart people. If one AGI is equal to 1 really smart person, scale enough compute for 1 million AGIs and we can solve quantum gravity in a year.
The major assumption here is that transformers can indeed solve every problem humans can.
> Scale enough compute for 1 million AGI and we can solve quantum gravity in a year.
That is wrong, it misses the point. We learn from the environment, we don't secrete quantum gravity from our pure brains. It's a RL setting of exploration and exploitation, a search process in the space of ideas based on validation in reality. A LLM alone is like a human locked away in a cell, with no access to test ideas.
If you take child Einstein and put him on a remote island, and come back 30 years later, do you think he would impress you with his deep insights? It's not the brain alone that made Einstein so smart; his environment also made a major contribution.
if you told child Einstein that light travels at a constant speed in all inertial frames and taught him algebra, then yes, he would come up with special relativity.
in general, an AGI might want to perform experiments to guide its exploration, but it's possible that the hypotheses that it would want to check have already been probed/constrained sufficiently. which is to say, a theoretical physicist might still stumble upon the right theory without further experiments.
Labeling observations with more than a list of column-label strings at the top would make it possible to mine for insights in, or produce a universal theory that covers, what has actually been observed rather than the presumed limits of theory.
CSVW is CSV on the Web as Linked Data.
With 7 metadata header rows at the top, a CSV could be converted to CSVW; with URIs for units like metre or meter or feet.
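A minimal sketch of the layout described above; the 7-metadata-row convention and the unit-URI placement are assumptions from this comment, not part of the CSVW spec:

```python
import csv
import io

def split_metadata_rows(text, n_meta=7):
    """Hypothetical sketch: treat the first n_meta rows of a CSV as
    per-column metadata (e.g. unit URIs such as
    http://qudt.org/vocab/unit/M for metre), followed by one header
    row and then the data rows. Returns (metadata, header, rows)."""
    rows = list(csv.reader(io.StringIO(text)))
    return rows[:n_meta], rows[n_meta], rows[n_meta + 1:]
```

A CSVW conversion would then map the metadata rows onto a JSON-LD table schema instead of leaving them inline.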
If a ScholarlyArticle publisher does not indicate that a given CSV or better :Dataset that is :partOf an article is a :premiseTo the presented argument, a human grad student or an LLM needs to identify the links or textual citations to the dataset CSV(s).
Easy: Identify all of the pandas.read_csv() calls in a notebook,
Expensive: Find the citation in a PDF, search for the text in "quotation marks" and try and guess which search result contains the dataset premise to an article;
Or, identify each premise in the article, pull the primary datasets, and run an unbiased automl report to identify linear and nonlinear variance relations and test the data dredged causal chart before or after manually reading an abstract.
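The "easy" path can be sketched with the stdlib ast and json modules; this assumes the calls are spelled read_csv with a literal string path:

```python
import ast
import json

def find_read_csv_calls(notebook_path):
    """Scan a Jupyter notebook's code cells for read_csv() calls and
    return the literal string arguments (dataset paths/URLs)."""
    with open(notebook_path) as f:
        nb = json.load(f)
    found = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") != "code":
            continue
        src = cell.get("source", "")
        if isinstance(src, list):  # nbformat stores source as a list of lines
            src = "".join(src)
        try:
            tree = ast.parse(src)
        except SyntaxError:
            continue  # skip cells with IPython magics etc.
        for node in ast.walk(tree):
            if not isinstance(node, ast.Call):
                continue
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
            if name == "read_csv":
                found += [a.value for a in node.args
                          if isinstance(a, ast.Constant) and isinstance(a.value, str)]
    return found
```

Variables or f-strings passed to read_csv won't be caught; that's where the "expensive" manual search starts.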
STORM: Get a Wikipedia-like report on your topic
Very cool! I asked it to create an article on the topic of my thesis and it was very good, but it lacked nuance and second order thinking i.e. here's the thing, what are the consequences of it and potential mitigations. It was able to pull existing thinking on a topic but not really synthesise a novel insight.
Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking.
From the paper it seems like this is only marginally better than the benchmark approach they used to compare against:
> Outline-driven RAG (oRAG), which is identical to RAG in outline creation, but further searches additional information with section titles to generate the article section by section
It seems like the key ingredients are:
- generating questions
- addressing the topic from multiple perspectives
- querying similar wikipedia articles (A high quality RAG source for facts)
- breaking the problem down by first writing an outline.
Which we can all do at home and swap out the wikipedia articles with our own data sets.
I was able to mimic this in GPT without the RAG component with this custom instruction prompt; it does indeed write decent content, better than other writing prompts I have seen.
PROMPT: create 3 diverse personas who would know about the user prompt; generate 5 questions that each persona would ask or clarify; use the questions to create a document outline; write the document with $your_role as the intended audience.
Nyxpsi – A Next-Gen Network Protocol for Extreme Packet Loss
Re: information sharing by entanglement, CHSH, and the Bell test: https://news.ycombinator.com/item?id=41480957#41515663 :
> And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?
Chrome switching to NIST-approved ML-KEM quantum encryption
Fractran: Computer architecture based on the multiplication of fractions
I think the Wikipedia page (https://en.wikipedia.org/wiki/FRACTRAN) is easier for understanding Fractran.
By Church-Turing, is Fractran sufficient to host a Hilbert logic? https://en.wikipedia.org/wiki/Hilbert_system
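A FRACTRAN interpreter fits in a few lines; PRIMEGAME below is Conway's program as given on the Wikipedia page. With the one-fraction program [3/2], a start state 2^a * 3^b halts at 3^(a+b), i.e. it adds the two exponents:

```python
from fractions import Fraction

def fractran(program, n, max_steps=1000):
    """Run a FRACTRAN program: at each step, multiply n by the first
    fraction in the program that yields an integer; halt when none does.
    Returns the sequence of states, starting with n."""
    fracs = [Fraction(p, q) for p, q in program]
    states = [n]
    for _ in range(max_steps):
        for f in fracs:
            m = n * f
            if m.denominator == 1:
                n = int(m)
                states.append(n)
                break
        else:
            return states  # halted: no fraction applies
    return states

# Conway's PRIMEGAME: started at 2, the powers of 2 that appear in the
# state sequence have exactly the prime exponents (2^2, 2^3, 2^5, ...).
PRIMEGAME = [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
             (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 14),
             (15, 2), (55, 1)]
```

The register-machine encoding (prime exponents as registers) is what makes FRACTRAN Turing-complete, which is the Church-Turing angle of the question above.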
Gigantic Wave in Pacific Ocean Was the Most Extreme 'Rogue Wave' on Record
> Scientists define a rogue wave as any wave more than twice the height of the waves surrounding it. The Draupner wave, for instance, was 25.6 meters tall, while its neighbors were only 12 meters tall.
> In comparison, the Ucluelet wave was nearly three times the size of its peers.
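The definition above reduces to a one-line check; heights in metres:

```python
def is_rogue(wave_height_m, surrounding_height_m):
    """A rogue wave is any wave more than twice the height of the
    waves surrounding it."""
    return wave_height_m > 2 * surrounding_height_m
```

For the Draupner wave, is_rogue(25.6, 12.0) holds with a ratio of about 2.13; the Ucluelet wave's ratio was nearly 3.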
> The buoy that picked up the Ucluelet wave was placed offshore along with dozens of others by a research institute called MarineLabs in an attempt to learn more about hazards out in the deep.
> [...] For centuries, rogue waves were considered nothing but nautical folklore. It wasn't until 1995 that myth became fact.
ScholarlyArticle: "Generation mechanism and prediction of an observed extreme rogue wave" (2022) https://www.nature.com/articles/s41598-022-05671-4
Microsoft plan to end kernel-level anti-cheat could be massive for Linux Gaming
Personally, for Linux support, an offline-only mode with no anti-cheat would be fine.
We don't even need to compete online as others can.
Unfortunately, [EA] games like FIFA are unplayable even offline on Linux (with Proton/WINE) due to anti-cheat, FWIU.
The EU has a new "preserving PC games" citizens' initiative. Will any of these games work without versions of operating systems that no longer receive security patches?
The "Stop Destroying Videogames" petition is open through 2025-07-03: https://citizens-initiative.europa.eu/initiatives/details/20... :
> This initiative calls to require publishers that sell or license videogames to consumers in the European Union (or related features and assets sold for videogames they operate) to leave said videogames in a functional (playable) state.
OS-kernel-integrated anti-cheat makes a game unplayable offline today.
Those probably won't archive well.
VirtualBox supports a "virtual TPM" device so that Windows 11 will run in a VM.
Internet Archive has so many games preserved with DOSbox, in HTML5 and WASM years later.
We shouldn't need a 120 FPS 4K (4k120) sunshine server remote desktop for gaming with old games.
> Specifically, the initiative seeks to prevent the remote disabling of videogames by the publishers, before providing reasonable means to continue functioning of said videogames without the involvement from the side of the publisher.
The initiative does not seek to acquire ownership of said videogames, associated intellectual rights or monetization rights, neither does it expect the publisher to provide resources for the said videogame once they discontinue it while leaving it in a reasonably functional (playable) state.
New evidence upends contentious Easter Island theory, scientists say
Grounding AI in reality with a little help from Data Commons
> Retrieval Interleaved Generation (RIG): This approach fine-tunes Gemma 2 to identify statistics within its responses and annotate them with a call to Data Commons, including a relevant query and the model's initial answer for comparison. Think of it as the model double-checking its work against a trusted source.
> [...] Trade-offs of the RAG approach: [...] In addition, the effectiveness of grounding depends on the quality of the generated queries to Data Commons.
Gotta say, this kinda feels like "giving it correct data didn't work, so what if we did that twice?".
Like, two layers of duct tape are better than one.
Seems reasonable and I can believe it helps, just also seems like it doesn't do much to improve confidence in the system as a whole. Particularly since they're basically asking it to find things worth checking, then have it write the checking query, and have it interpret the results. When it's the thing that screwed it up in the first place.
eh, I think this is pretty reasonable and not a "hack". It matches what we do as people. I think there probably needs to be research into how to tell it when it doesn't know something, however.
I think if you remember that LLMs are not databases, but that they do contain a super lossy-compressed version of (their training) knowledge, this feels less like a hack. If you ask someone, "who won the World Cup in 2000?", they may say "I think it was X, but let me google it first". That person isn't screwed up; using tools isn't a failure.
If the context is a work setting, or somewhere that is data-centric, it totally makes sense to check it. Like a chat bot for a store or company that is helping someone troubleshoot or research. Anything with really obvious answers that are easy to learn from volumes of data ("what company makes the Corolla?") probably doesn't need fact checking as often, but why not have the system check its work?
Meanwhile, programming, writing prose, etc are not things you generally fact-check mid-way, and are things that can be "learned" well from statistical volume. Most programmers can get "pretty good" syntax on first try, and any dedicated syntax tool will get to basically 100%, and the same makes sense for an LLM.
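The RIG flow described above (draft an answer, extract the statistic, re-check it against a trusted source) can be sketched as follows. This is a hedged illustration only: `query_data_commons` is a hypothetical stub standing in for the real Data Commons API, and the population figures are placeholder values.

```python
# Sketch of a Retrieval Interleaved Generation (RIG)-style check: the
# model's draft answer contains a statistic, which is compared against
# an external source. `query_data_commons` is a stand-in stub, NOT the
# real Data Commons API.
import re

def query_data_commons(query: str) -> float:
    # Stub: a real implementation would call the Data Commons service.
    known = {"population of France 2023": 68.2}  # placeholder value
    return known[query]

def check_statistic(draft_answer: str, query: str) -> dict:
    """Extract the first number from the draft and compare to the source."""
    match = re.search(r"[-+]?\d+(?:\.\d+)?", draft_answer)
    claimed = float(match.group()) if match else None
    trusted = query_data_commons(query)
    agrees = claimed is not None and abs(claimed - trusted) / trusted < 0.05
    return {"claimed": claimed, "trusted": trusted, "agrees": agrees}

result = check_statistic("France has about 67.5 million people.",
                         "population of France 2023")
```

The key point is that the comparison step is ordinary deterministic code, so the model's draft can be flagged without trusting the model to grade itself.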
This is similar to the difference between data dredging and scientific hypothesis testing.
'But what bias did we infer with [LLM knowledgebase] background research prior to formulating a hypothesis, and who measured?'
There are various methods of Inference: Inductive, Deductive, and Abductive
What are the limits of Null Hypothesis methods of scientific inquiry?
Better-performing “25519” elliptic-curve cryptography
> The x25519 algorithm also plays a role in post-quantum safe cryptographic solutions, having been included as the classical algorithm in the TLS 1.3 and SSH hybrid scheme specifications for post-quantum key agreement.
Really though? This mostly-untrue statement is the line that warrants adding hashtag #post-quantum-cryptography to the blogpost?
Actually, e.g. rustls added X25519Kyber768Draft00 support this year: https://news.ycombinator.com/item?id=41534500
/?q X25519Kyber768Draft00: https://www.google.com/search?q=X25519Kyber768Draft00
Kyber768 is the post-quantum algorithm in that example, not x25519.
From "OpenSSL 3.4 Alpha 1 Released with New Features" (8 days ago) https://news.ycombinator.com/item?id=41456447#41456774 :
> Someday there will probably be a TLS1.4/2.0 with PQ, and also FIPS-140 -4?
> Are there additional ways to implement NIST PQ finalist algos with openssl?
- open-quantum-safe/oqs-provider [implements mlkem512 through mlkem1024 and x25519_mlkem768]
Not sure what you're trying to say here. x25519 is objectively not PQC and never claimed to be; this isn't debatable.
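The point of a hybrid scheme like X25519Kyber768Draft00 is that the final secret depends on both components, so an attacker must break both the classical and the post-quantum primitive. A minimal conceptual sketch, with placeholder byte strings (the real draft specifies its own exact concatenation and KDF):

```python
# Conceptual hybrid key-exchange combiner: the derived key depends on
# BOTH the classical (X25519) and post-quantum (ML-KEM/Kyber) shared
# secrets. The byte strings below are placeholders, not output of the
# real primitives; real hybrid schemes define their own exact KDF.
import hashlib

def hybrid_shared_secret(ss_classical: bytes, ss_pq: bytes) -> bytes:
    # Hash the concatenation so neither secret alone determines the key.
    return hashlib.sha256(ss_classical + ss_pq).digest()

ss_x25519 = b"\x01" * 32  # placeholder for an X25519 shared secret
ss_mlkem = b"\x02" * 32   # placeholder for an ML-KEM-768 shared secret
key = hybrid_shared_secret(ss_x25519, ss_mlkem)
```

Changing either input changes the derived key, which is the property the hybrid construction is after.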
Defend against vampires with 10 gbps network encryption
A notebook with pandas would have had a df.plot().
9.71 Gbps with WireGuard on a 10 Gbps link, with sysctl tunings and custom MTUs.
I had heard of token ring, but not 10BASE5: https://en.wikipedia.org/wiki/10BASE5
Microsoft's quantum-resistant cryptography is here
> With NIST releasing an initial group of finalized post-quantum encryption standards, we are excited to bring these into SymCrypt, starting with ML-KEM (FIPS 203, formerly Kyber), a lattice-based key encapsulation mechanism (KEM). In the coming months, we will incorporate ML-DSA (FIPS 204, formerly Dilithium), a lattice-based digital signature scheme and SLH-DSA (FIPS 205, formerly SPHINCS+), a stateless hash-based signature scheme.
> In addition to the above PQC FIPS standards, in 2020 NIST published the SP 800-208 recommendation for stateful hash-based signature schemes which are also resistant to quantum computers. As NIST themselves called out, these algorithms are not suitable for general use because their security depends on careful state management, however, they can be useful in specific contexts like firmware signing. In accordance with the above NIST recommendation we have added eXtended Merkle Signature Scheme (XMSS) to SymCrypt, and the Leighton-Micali Signature Scheme (LMS) will be added soon along with the other algorithms mentioned above.
microsoft/SymCrypt /CHANGELOG.md: https://github.com/microsoft/SymCrypt/blob/main/CHANGELOG.md
TIL that SymCrypt builds on Ubuntu: https://github.com/microsoft/SymCrypt/releases :
> Generic Linux AMD64 (x86-64) and ARM64 - built and validated on Ubuntu, but because SymCrypt has very few standard library dependencies, it should work on most Linux distributions
The Rustls TLS Library Adds Post-Quantum Key Exchange Support
- cf article about PQ (2024) https://blog.cloudflare.com/pq-2024/
- rustls-post-quantum: https://crates.io/crates/rustls-post-quantum
- rustls-post-quantum docs: https://docs.rs/rustls-post-quantum/latest/rustls_post_quant... :
> This crate provides a rustls::crypto::CryptoProvider that includes a hybrid [1], post-quantum-secure [2] key exchange algorithm – specifically X25519Kyber768Draft00.
> X25519Kyber768Draft00 is pre-standardization, so you should treat this as experimental. You may see unexpected interop failures, and the algorithm implemented here may not be the one that eventually becomes widely deployed.
> However, the two components of this key exchange are well regarded: X25519 alone is already used by default by rustls, and tends to have higher quality implementations than other elliptic curves. Kyber768 was recently standardized by NIST as ML-KEM-768.
"Module-Lattice-Based Key-Encapsulation Mechanism Standard" KEM: https://csrc.nist.gov/pubs/fips/203/final :
> The security of ML-KEM is related to the computational difficulty of the Module Learning with Errors problem. [...] This standard specifies three parameter sets for ML-KEM. In order of increasing security strength and decreasing performance, these are ML-KEM-512, ML-KEM-768, and ML-KEM-1024.
Breaking Bell's Inequality with Monte Carlo Simulations in Python
Hidden variable theory: https://en.wikipedia.org/wiki/Hidden-variable_theory
Bell test: https://en.wikipedia.org/wiki/Bell_test :
> To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η [\eta], defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N. David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of η > 2*sqrt(2)-2 ~= 0.83 is required for a loophole-free violation.[51] Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3 ~= 0.67, which is the optimal bound for the CHSH inequality.[53] Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η > (sqrt(5)-1)/2 ~= 0.62 [54]
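The three detection-efficiency thresholds quoted above can be computed directly, which also confirms their ordering (the four-setting inequality tolerates the lowest efficiency):

```python
# Detection-efficiency thresholds for loophole-free Bell violations,
# per the quoted "Bell test" article: Garg-Mermin (maximally entangled,
# CHSH), Eberhard (partially entangled, CHSH), and a four-setting
# inequality.
import math

eta_garg_mermin = 2 * math.sqrt(2) - 2     # ~0.828
eta_eberhard = 2 / 3                       # ~0.667
eta_four_setting = (math.sqrt(5) - 1) / 2  # ~0.618

# Lower thresholds are easier to meet experimentally.
assert eta_four_setting < eta_eberhard < eta_garg_mermin
```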
CHSH inequality: https://en.wikipedia.org/wiki/CHSH_inequality
/sbin/chsh
Isn't it possible to measure the wake of a photon instead of measuring the photon itself; to measure the wake without affecting the boat that has already passed? And shouldn't a simple beam splitter be enough to demonstrate entanglement if there is an instrument with sufficient sensitivity to infer the phase of a passed photon?
This says that intensity is sufficient to read phase: https://news.ycombinator.com/item?id=40492160 :
> "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even these of quantum wave systems—can be deduced from something a lot easier to measure: light intensity
And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?
From "Violation of Bell inequality by photon scattering on a two-level emitter" https://news.ycombinator.com/item?id=40917761 ... From "Scientists show that there is indeed an 'entropy' of quantum entanglement" (2024) https://news.ycombinator.com/item?id=40396001#40396211 :
> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
That was probably the "Bell test" article, which - IIUC - does indeed indicate that if you can read 62% of the photons you are likely to find a loophole-free violation.
What is the photon detection rate in this and other simulators?
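In the spirit of the linked article, here is a minimal local-hidden-variable Monte Carlo for the CHSH quantity. It is a sketch under a simple assumed model (deterministic sign-of-cosine outcomes with a shared hidden angle), chosen only to illustrate that any such local model cannot exceed the classical bound |S| ≤ 2, whereas quantum mechanics predicts up to 2*sqrt(2) ≈ 2.83:

```python
# Local-hidden-variable Monte Carlo for CHSH: each photon pair carries
# a shared hidden angle lam, and each detector deterministically
# outputs sign(cos(setting - lam)). This particular model saturates,
# but cannot exceed, the classical bound |S| <= 2 (up to sampling noise).
import math
import random

random.seed(0)

def outcome(setting: float, lam: float) -> int:
    return 1 if math.cos(setting - lam) >= 0 else -1

def correlation(a: float, b: float, n: int = 100_000) -> float:
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)  # shared hidden variable
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

# Standard CHSH measurement settings.
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4
S = (correlation(a, b) - correlation(a, bp)
     + correlation(ap, b) + correlation(ap, bp))
```

A quantum simulation with cos²-law correlations at the same settings would give S ≈ 2.83, which is the violation the Bell test detects.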
Show HN: Pyrtls, rustls-based modern TLS for Python
"PEP 543 – A Unified TLS API for Python" specified interfaces that would make it easier to use different TLS implementations: https://peps.python.org/pep-0543/#interfaces
alphaXiv: Open research discussion on top of arXiv
Hey alphaXiv, you won’t let me claim some of my preprints, because there’s no match with the email address. Which there can’t be, as we only list generic first.last@org addresses in the papers. Tried the claiming process twice; nothing happened. Not all papers are on ORCID, so that doesn’t help.
I think it’ll be hard growing a discussion platform if there are barriers to entry like that to even populate your profile.
How would you propose making claiming possible without the risk of hijacking/misrepresentation?
The only way I see this working is for paper authors to include their public keys in the paper; preferably as metadata and have them produce a signed message using their private key which allows them to claim the paper.
While the grandparent is understandably disappointed with the current implementation, relying on emails was always doomed from the start.
Given that the paper would have to be changed regardless, including the full email address is a relatively easy solution. ORCID is probably easier than requiring public keys, and a lot of journals already require it.
W3C Decentralized Identifiers are designed for this use case.
Decentralized identifier: https://en.wikipedia.org/wiki/Decentralized_identifier
W3C TR did-core: "DID Decentralized Identifiers 1.0": https://www.w3.org/TR/did-core/
W3C TR did-use-cases: "Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/
"Email addresses are not good 'permanent' identifiers for accounts" (2024) https://news.ycombinator.com/item?id=38823817#38831952
I'm sure that would work, but most researchers already have an ORCID and are required to provide it in other places anyway.
Charging lithium-ion batteries at high currents first increases lifespan by 50%
But are there risks and thus costs?
Initial lithium deactivation is 30%, compared to 9% with slow formation charging.
How much capacity is lost as a result of this?
Lithium deactivation is inversely proportional to capacity. We could just add extra capacity to make up for it, though. From there, the battery would maintain capacity for a longer time than before.
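Taking the 30% vs. 9% deactivation figures from the comment above, the extra capacity needed works out as a simple 1/(1 - loss) oversize factor (a back-of-the-envelope sketch, not a claim about any particular cell design):

```python
# If fast formation deactivates 30% of the lithium vs. 9% for slow
# formation (figures from the comment above), the cell must be
# oversized by 1/(1 - loss) to deliver the same usable capacity.
def oversize_factor(deactivation_fraction: float) -> float:
    return 1 / (1 - deactivation_fraction)

fast = oversize_factor(0.30)  # fast formation
slow = oversize_factor(0.09)  # slow formation
```

So fast formation would need roughly 43% extra capacity built in, versus about 10% for slow formation; the trade is material cost against the longer post-formation lifespan reported in the article.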
Isn't there fire risk from charging a _ battery at higher-than-spec currents?
It's just the single initial charge that has to be at high current.
So, for safety, shouldn't there also be battery fire containment [vessels] for the manufacturing process?
Absolutely, but I hope there already are measures taken to prevent and contain battery fires during manufacturing.
Same origin: Quantum and GR from Riemannian geometry and Planck scale formalism
ScholarlyArticle: "On the same origin of quantum physics and general relativity from Riemannian geometry and Planck scale formalism" (2024) https://www.sciencedirect.com/science/article/pii/S092765052...
- NewsArticle: "Magical equation unites quantum physics, Einstein’s general relativity in a first" https://interestingengineering.com/science/general-relativit... :
> “We proved that the Einstein field equation from general relativity is actually a relativistic quantum mechanical equation,” the researchers note in their study.
> [...] To link them, the researchers developed a mathematical framework that “Redefined the mass and charge of leptons (fundamental particles) in terms of the interactions between the energy of the field and the curvature of the spacetime.”
> “The obtained equation is covariant in space-time and invariant with respect to any Planck scale. Therefore, the constants of the universe can be reduced to only two quantities: Planck length and Planck time,” the researchers note.
the reddit discussion claiming this is AI nonsense: https://www.reddit.com/r/TheoreticalPhysics/comments/1fbij1e...
Someone could also or instead read:
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
From https://news.ycombinator.com/item?id=38871054 .. "Light and gravitational waves don't arrive simultaneously" (2023) https://news.ycombinator.com/item?id=38056295 :
>> In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics)
Newly Discovered Antibody Protects Against All Covid-19 Variants
> The technology used to isolate the antibody, termed Ig-Seq, gives researchers a closer look at the antibody response to infection and vaccination using a combination of single-cell DNA sequencing and proteomics.
/? ig-seq [ site:github.com ] : https://www.google.com/search?q=ig-seq+site%3Agithub.com
https://www.illumina.com/science/sequencing-method-explorer/... :
> Rep-Seq is a collective term for repertoire sequencing technologies. DNA sequencing of immunoglobulin genes (Ig-seq) and molecular amplification fingerprinting
> [ Ig-seq] is a targeted gDNA amplification method performed with primers complementary to the rearranged V-region gene (VDJ recombinant). Amplification of cDNA is then performed with the appropriate 5’ primers.
[deleted]
Ten simple rules for scientific code review
> Rule 1: Review code just like you review other elements of your research
> Rule 2: Don’t leave code review to the end of the project
> Rule 3: The ideal reviewer may be closer than you think
> Rule 4: Make it easy to review your code
> Rule 5: Do it in person and synchronously… A. Circulate code in advance. B. Provide necessary context. C. Ask for specific feedback if needed. D. Walk through the code. E. Gather actionable comments and suggestions
> Rule 6:…and also remotely and asynchronously
> Rule 7: Review systematically
> A. Run the code and aim to reproduce the results. B. Read the code through—first all of it, with a focus on the API. C. Ask questions. D. Read the details—focus on modularity and design, E. Read the details—focus on the math, F. Read the details—focus on performance, G. Read the details—focus on formatting, typos, comments, documentation, and overall code clarity
> Rule 8: Know your limits
> Rule 9: Be kind
> Rule 10: Reciprocate
Quantum error correction below the surface code threshold
"Quantum error correction below the surface code threshold" (2024) https://arxiv.org/abs/2408.13687 :
> Abstract: Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of Λ = 2.14 ± 0.02 when increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% ± 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its best physical qubit's lifetime by a factor of 2.4 ± 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 μs at distance-5 up to a million cycles, with a cycle time of 1.1 μs. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated error events occurring approximately once every hour, or 3 × 109 cycles. Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms.
Cavity-Mediated Entanglement of Parametrically Driven Spin Qubits via Sidebands
From "Cavity-Mediated Entanglement of Parametrically Driven Spin Qubits via Sidebands" (2024) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan... :
> We show that the sidebands generated via the driving fields enable highly tunable qubit-qubit entanglement using only ac control and without requiring the qubit and cavity frequencies to be tuned into simultaneous resonance. The model we derive can be mapped to a variety of qubit types, including detuning-driven one-electron spin qubits in double quantum dots and three-electron resonant exchange qubits in triple quantum dots. The high degree of nonlinearity inherent in spin qubits renders these systems particularly favorable for parametric drive-activated entanglement.
- Article: "Radical quantum computing theory could lead to more powerful machines than previously imagined" https://www.livescience.com/technology/computing/radical-qua... :
> Each qubit operates in a given frequency.
> These qubits can then be stitched together through quantum entanglement — where their data is linked across vast separations over time or space — to process calculations in parallel. The more qubits are entangled, the more exponentially powerful a quantum computer will become.
> Entangled qubits must share the same frequency. But the study proposes giving them "extra" operating frequencies so they can resonate with other qubits or work on their own if needed.
Is there an infinite amount of quantum computational resources in the future that could handle today's [quantum] workload?
Rpi-open-firmware: open-source VPU side bootloader for Raspberry Pi
> Additionally, there is a second-stage chainloader running on ARM capable of initializing eMMC, FAT, and the Linux kernel.
There's now (SecureBoot) UEFI firmware for Rpi3+, so grub and systemd-boot should work on Raspberry Pis: https://raspberrypi.stackexchange.com/questions/99473/why-is... :
> The Raspberry Pi is special in that the primary (on-chip ROM), secondary (bootcode.bin), and third bootloader (start.elf) are executed on its GPU, one chainloading the other. The instruction set is not properly documented and start.elf
> What can be done (as SuSE and Microsoft have demonstrated) is to replace kernel.img at will - even with a custom version of TianoCore (an open-source UEFI implementation) or U-Boot. This can then be used to start an UEFI-compatible GRUB2 or BOOTMGR binary.
"UEFI Secure Boot on the Raspberry Pi" (2023) https://news.ycombinator.com/item?id=35815382
AlphaProteo generates novel proteins for biology and health research
> Trained on vast amounts of protein data from the Protein Data Bank (PDB) and more than 100 million predicted structures from AlphaFold, AlphaProteo has learned the myriad ways molecules bind to each other. Given the structure of a target molecule and a set of preferred binding locations on that molecule, AlphaProteo generates a candidate protein that binds to the target at those locations.
Show HN: An open-source implementation of AlphaFold3
Hi HN - we’re the founders of Ligo Biosciences and are excited to share an open-source implementation of AlphaFold3, the frontier model for protein structure prediction.
Google DeepMind and their new startup Isomorphic Labs are expanding into drug discovery. They developed AlphaFold3 as their model to accelerate drug discovery and create demand from big pharma. They already signed Novartis and Eli Lilly for $3 billion - Google’s becoming a pharma company! (https://www.isomorphiclabs.com/articles/isomorphic-labs-kick...)
AlphaFold3 is a biomolecular structure prediction model that can do three main things: (1) Predict the structure of proteins; (2) Predict the structure of drug-protein interactions; (3) Predict nucleic acid - protein complex structure.
AlphaFold3 is incredibly important for science because it vastly accelerates the mapping of protein structures. It takes one PhD student their entire PhD to do one structure. With AlphaFold3, you get a prediction in minutes on par with experimental accuracy.
There’s just one problem: when DeepMind published AlphaFold3 in May (https://www.nature.com/articles/s41586-024-07487-w), there was no code. This brought up questions about reproducibility (https://www.nature.com/articles/d41586-024-01463-0) as well as complaints from the scientific community (https://undark.org/2024/06/06/opinion-alphafold-3-open-sourc...).
AlphaFold3 is a fundamental advance in structure modeling technology that the entire biotech industry deserves to be able to reap the benefits from. Its applications are vast, including:
- CRISPR gene editing technologies, where scientists can see exactly how the DNA interacts with the scissor Cas protein;
- Cancer research - predicting how a potential drug binds to the cancer target. One of the highlights in DeepMind’s paper is the prediction of a clinical KRAS inhibitor in complex with its target.
- Antibody / nanobody to target predictions. AlphaFold3 improves accuracy on this class of molecules 2-fold compared to the next best tool.
Unfortunately, no companies can use it since it is under a non-commercial license!
Today we are releasing the full model trained on single chain proteins (capability 1 above), with the other two capabilities to be trained and released soon. We also include the training code. Weights will be released once training and benchmarking is complete. We wanted this to be truly open source so we used the Apache 2.0 license.
Deepmind published the full structure of the model, along with each components’ pseudocode in their paper. We translated this fully into PyTorch, which required more reverse engineering than we thought!
When building the initial version, we discovered multiple issues in DeepMind’s paper that would interfere with the training - we think the deep learning community might find these especially interesting. (Diffusion folks, we would love feedback on this!) These include:
- MSE loss scaling differs from Karras et al. (2022). The weighting provided in the paper does not downweigh the loss at high noise levels.
- Omission of residual layers in the paper - we add these back and see benefits in gradient flow and convergence. Anyone have any idea why Deepmind may have omitted the residual connections in the DiT blocks?
- The MSA module, in its current form, has dead layers. The last pair weighted averaging and transition layers cannot contribute to the pair representation, hence no grads. We swap the order to the one in the ExtraMsaStack in AlphaFold2. An alternative solution would be to use weight sharing, but whether this is done is ambiguous in the paper.
More about those issues here: https://github.com/Ligo-Biosciences/AlphaFold3
How this came about: we are building Ligo (YC S24), where we are using ideas from AlphaFold3 for enzyme design. We thought open sourcing it was a nice side quest to benefit the community.
For those on Twitter, there was a good thread a few days ago that has more information: https://twitter.com/ArdaGoreci/status/1830744265007480934.
A few shoutouts: a huge thanks to OpenFold for pioneering the previous open-source implementation of AlphaFold. We did a lot of our early prototyping with proteinFlow, developed by Lisa at AdaptyvBio; we also look forward to partnering with them to bring you the next versions! We are also partnering with Basecamp Research to supply this model with the best sequence data known to science. Thanks also to Matthew Clark (https://batisio.co.uk) for his amazing animations!
We’re around to answer questions and look forward to hearing from you!
Does this win the Folding@home competition, or is/was that a different goal than what AlphaFold3 and ligo-/AlphaFold3 already solve for?
Folding@Home https://en.wikipedia.org/wiki/Folding@home :
> making it the world's first exaflop computing system
Folding@home and protein structure prediction methods such as AlphaFold address related but different questions. The former intends to describe the process of a protein undergoing folding over time, while the latter tries to determine the most stable conformation of a protein (the end result of folding).
Folding@home uses Rosetta, a physics-based approach that is outperformed by deep learning methods such as AlphaFold2/3.
Folding@home uses Rosetta only to generate initial conformations[1], but the actual simulation is based on Markov State Models. Note that there is another distributed computing project for Rosetta, Rosetta@home.
[1]: https://foldingathome.org/dig-deeper/#:~:text=employing%20Ro...
Kids Should Be Taught to Think Logically
I agree with the article. But I'd go further and say that children should be introduced to, and taught, multiple styles of thinking. And they should understand the strengths and weaknesses of those styles for different tasks and situations.
But logical thought is definitely important. Judging by some people I encounter, even understanding the concepts of "necessary but not sufficient" or "X is-a Y does not mean that Y is-a X" would make a big difference.
There are plenty of lists of thinking styles. I doubt that any of them are exhaustive or discrete. For example:
Critical, creative, analytical, abstract, concrete, divergent/lateral, convergent/vertical.
Or
Synthesist, idealist, pragmatic, analytic, realist.
There are lots of options. My point was really about awareness of the different styles and their advantages/disadvantages.
Inductive, Deductive, Abductive Inference
Reason > Logical reasoning methods and argumentation: https://en.wikipedia.org/wiki/Reason#Logical_reasoning_metho...
Critical Thinking > Logic and rationality > Deduction, abduction and induction ; Critical thinking and rationality: https://en.wikipedia.org/wiki/Critical_thinking#Deduction,_a...
Logic: https://en.wikipedia.org/wiki/Logic :
> Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language.
Logical reasoning: https://en.wikipedia.org/wiki/Logical_reasoning
An argument can be Sound or Unsound; or Cogent or Uncogent.
Exercises I recall:
Underline or Highlight or Annotate the topic sentence / precis sentence (which can be the last sentence of an introductory paragraph),
Underline the conclusion,
Underline and label the premises; P1, P2, P3
Don't trust; Verify the logical form
If P1 then Q,
P2,
therefore Q

If P1, P2, and P3,
P1 (kinda),
we all like ____,
therefore Q
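The "verify the logical form" exercise can be mechanized with a brute-force truth table: an argument form is valid iff every assignment that makes all premises true also makes the conclusion true. A small sketch checking modus ponens against the classic fallacy of affirming the consequent:

```python
# Brute-force validity check over truth tables: an argument is valid
# iff every truth assignment satisfying all premises also satisfies
# the conclusion.
from itertools import product

def valid(premises, conclusion) -> bool:
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: If P then Q; P; therefore Q.
modus_ponens = valid([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q)

# Affirming the consequent: If P then Q; Q; therefore P. (Invalid.)
affirming_consequent = valid([lambda p, q: implies(p, q), lambda p, q: q],
                             lambda p, q: p)
```

Validity here is only about form; soundness additionally requires the premises to actually be true.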
Logic puzzles, "Pete, it's a fool that looks for logic in the chambers of the human heart.", money x3, posturing
Coding on iPad using self-hosted VSCode, Caddy, and code-server
Is it ergonomic to code on a tablet without a BCI?
https://vscode.dev can connect to a remote vscode instance in a container e.g. over Remote Tunnels ; but browsers trap so many keyboard shortcuts.
Which container with code-server to run to connect to from vscode client?
You can specify a development container that contains code-server with devcontainer.json.
vscode, Codespaces and these tools support devcontainer.json, too:
coder/envbuilder: https://github.com/coder/envbuilder
loft-sh/devpod: https://github.com/loft-sh/devpod
lapce/lapdev: https://github.com/lapce/lapdev
JupyterHub and BinderHub can spawn containers that also run code-server. Though repo2docker and REES don't yet support devcontainer.json, they do support bringing your own Dockerfile.
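As a minimal sketch of the devcontainer.json approach mentioned above: the base image and port are illustrative assumptions, and code-server is installed via its published install script rather than a prebuilt image.

```json
{
  "name": "code-server-dev",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "curl -fsSL https://code-server.dev/install.sh | sh",
  "forwardPorts": [8080]
}
```

Tools that consume devcontainer.json (vscode, Codespaces, devpod, envbuilder) would then build the container and expose code-server on the forwarded port.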
> but browsers trap so many keyboard shortcuts.
As a result, unfortunately the F1 keyboard shortcut calls browser help not vscode help.
Aren't there browser-based RDP apps that don't mangle all of the shortcuts, or does it have to be fullscreen in a browser to support a VM-like escape sequence to return keyboard input to the host?
OpenSSL 3.4 Alpha 1 Released with New Features
The roadmap on the site is 404'ing and the openssl/web repo is archived?
The v3.4 roadmap entry PR mentions QUIC (which HTTP/3 requires)? https://github.com/openssl/web/pull/481/files
Someday there will probably be a TLS1.4/2.0 with PQ, and also FIPS-140 -4?
Are there additional ways to implement NIST PQ finalist algos with openssl?
open-quantum-safe/oqs-provider: https://github.com/open-quantum-safe/oqs-provider :
openssl list -signature-algorithms -provider oqsprovider
openssl list -kem-algorithms -provider oqsprovider
openssl list -tls-signature-algorithms
Gold nugget formation from earthquake-induced piezoelectricity in quartz
"Gold nugget formation from earthquake-induced piezoelectricity in quartz" (2024) https://www.nature.com/articles/s41561-024-01514-1
Piezoelectric effects are how crystal radios work; AM Amplitude Modulated, shortwave, longwave
Quartz is dielectric.
Quartz clocks, quartz voltmeter
From https://news.ycombinator.com/item?id=40859142 :
> Ancient lingams had Copper (Cu) and Gold (Au), and crystal FWIU.
> From "Can you pump water without any electricity?" https://news.ycombinator.com/item?id=40619745 :
>> - /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
FWIU rotating the lingams causes vibrations which scare birds away.
Piezoelectricity: https://en.wikipedia.org/wiki/Piezoelectricity
"New "X-Ray Vision" Technique Sees Inside Crystals" (2024) https://news.ycombinator.com/item?id=40630832
"Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691 :
> Electron–electron interactions in high-mobility conductors can give rise to transport signatures resembling those described by classical hydrodynamics. Using a nanoscale scanning magnetometer, we imaged a distinctive hydrodynamic transport pattern—stationary current vortices—in a monolayer graphene device at room temperature.
"Goldene: New 2D form of gold makes graphene look boring" (2024) https://news.ycombinator.com/item?id=40079905
"Gold nanoparticles kill cancer – but not as thought" (2024) https://news.ycombinator.com/item?id=40819854 :
> Star-shaped gold nanoparticles up to 200nm kill at least some forms of cancer; https://news.ycombinator.com/item?id=40819854
Are there phononic excitations from earthquake-induced piezoelectric and pyroelectric effects?
Do (surface) plasmon polaritons SPhP affect the interactions between quartz and gold, given heat and vibration as from an earthquake?
"Extreme light confinement and control in low-symmetry phonon-polaritonic crystals" like quartz https://arxiv.org/html/2312.06805v2
Dublin Core, what is it good for?
Do regular search engines index DCMI dcterms:? Does Google Scholar or Google Search index schema.org/CreativeWork yet?
Chrome 130: Direct Sockets API
Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431
Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy
docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...
I applaud the Chrome team for implementing
Isolated Web Apps seems like it mitigates the majority of the significant concerns Mozilla has.
Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.
Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.
IMHO they're way too confident in application sandboxing but we already know that we need containers, Gvisor or Kata, and container-selinux to isolate server processes.
Chrome, Firefox, and Edge all have the same app sandbox now FWIU. It is targeted at Pwn2Own every year. I don't think the application-level browser sandbox has a better record of vulns than containers or VMs.
So, IDK about trusting in-browser isolation features, or sockets with unsigned cross-domain policies.
OTOH things that would work with Direct Sockets IIUC: P2P VPN server/client, blind proxy relay without user confirmation, HTTP server, behind the firewall port scanner that uploads scans,
I can understand FF's position on Direct Sockets.
There used to be a "https server for apps" Firefox extension.
It is still necessary to install e.g. Metamask to add millions of lines of unverified browser code and a JS/WASM interpreter to an otherwise secured Zero Trust chain. Without a wallet browser extension like Metamask explicitly installed, browsers otherwise must use vulnerable regular DNS instead of EDNS. Without Metamask installed, it's not possible for a compromised browser to hack at a blockchain without a relay, because most blockchains specifically avoid regular HTTPS. Existing browsers do not support blockchain protocols without the user approving install of e.g. Metamask over PKI SSL.
FWIU there are many examples of people hijacking computers to mine PoW coins in JS or WASM, and we don't want that to be easier to do without requiring confirmation from easily-fooled users.
Browsers SHOULD indicate when a browser tab is PoW mining in the background as the current user.
Are there downgrade attacks on this?
Don't you need HTTPS to serve the origin policy before switching to Direct Sockets anyway?
HTTP/3 QUIC is built on UDP. Can apps work with WS or WebRTC over HTTP/3 instead of sockets?
Edit: (now I can read the spec in question)
Thanks for your considered response. I will digest it a bit when I have time! :)
Show HN: A retro terminal text editor for GNU/Linux coded in C (C-edit)
I set about coding my own version of the classic MS-DOS EDIT.COM for GNU/Linux systems four years ago, and this is where the project is at... still rough around the edges but works well with Termux! :) Demo: https://www.youtube.com/watch?v=H7bneUX_kVA
QB64 is an EDIT.COM-style IDE and a compiler for QuickBasic .BAS programs: https://github.com/QB64Official/qb64#usage
There's a QBjs, for QuickBasic on the web.
There's a QB64 vscode extension: https://github.com/QB64Official/vscode
Textual has a MarkdownViewer TUI control with syntax highlighting and a file tree in a side panel like NERDtree, but not yet a markdown editor.
QuickBasic was my first programming language and EDIT.COM was my first IDE. I love going back down memory lane, thanks!
Same. `edit` to edit. These days perhaps not coincidentally I have a script called `e` for edit that opens vim: https://github.com/westurner/dotfiles/blob/develop/scripts/e
GORILLA.BAS! https://en.wikipedia.org/wiki/Gorillas_(video_game)
gorilla.bas with dosbox in html: https://archive.org/details/GorillasQbasic
rewritten with jquery: https://github.com/theraccoonbear/BrowserGORILLAS.BAS/blob/m...
Basically the same thing but for learning, except you can't change the constants in the simulator by editing the source of the game with Ctrl-C and running it with F5:
- PHET > Projectile Data Lab https://phet.colorado.edu/en/simulations/projectile-data-lab
- sensorcraft is like minecraft but in python with pyglet for OpenGL 3D; self.add_block(), gravity, ai, circuits: https://sensorcraft.readthedocs.io/en/stable/
PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads
Which Intel GPUs? Is this for the Max Series GPUs and/or the Gaudi 3 launching in 3Q?
"Intel To Sunset First-Gen Max Series GPU To Focus On Gaudi, Falcon Shores Chips" (2024-05) https://www.crn.com/news/components-peripherals/2024/intel-s...
It’ll be something that supports oneAPI, so the PVC 1100/1500, the Flex GPUs, the A-series desktop GPUs (and B-series when it comes out). Not Gaudi 3 afaict, that has a custom toolchain (but Falcon Shores should be on oneAPI).
SDL3 new GPU API merged
SDL3 is still in preview, but the new GPU API is now merged into the main branch while SDL3 maintainers apply some final tweaks.
As far as I understand: the new GPU API is notable because it should allow writing graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - and previously that required Unity or Unreal, or your own custom solution.
WebGPU/WGSL is a similar "cross-platform graphics stack" effort but as far as I know nobody has written console backends for it. (Meanwhile the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)
Why is SDL API needed vs gfx-rs / wgpu though? I.e. was there a need to make yet another one?
Having a C API like that is always nice. I don't wanna fight Rust.
WebGPU has a (mostly) standardized C API: https://github.com/webgpu-native/webgpu-headers
wgpu supports WebGPU: https://github.com/gfx-rs/wgpu :
> While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.
That’s just for the shading language
The Rust wgpu project has an alternative C API which is identical to (or at least closely matches; I haven't looked at it in detail yet) the official webgpu.h header. For instance all examples in here are written in C:
https://github.com/gfx-rs/wgpu-native/tree/trunk/examples
There's definitely also people using wgpu from Zig via the C bindings.
I found:
shlomnissan/sdl-wasm: https://github.com/shlomnissan/sdl-wasm :
> A simple example of compiling C/SDL to WebAssembly and binding it to an HTML5 canvas.
erik-larsen/emscripten-sdl2-ogles2: https://github.com/erik-larsen/emscripten-sdl2-ogles2 :
> C++/SDL2/OpenGLES2 samples running in the browser via Emscripten
IDK how much work there is to migrate these to SDL3?
Are there WASM compilation advantages to SDL3 vs SDL2?
There are rust SDL2 bindings: https://github.com/Rust-SDL2/rust-sdl2#use-sdl2render
use::sdl2render, gl-rs for raw OpenGL: https://github.com/Rust-SDL2/rust-sdl2?tab=readme-ov-file#op...
*sdl2::render
src/sdl2/render.rs: https://github.com/Rust-SDL2/rust-sdl2/blob/master/src/sdl2/...
SDL/test /testautomation_render.c: https://github.com/libsdl-org/SDL/blob/main/test/testautomat...
SDL is for gamedevs and supports consoles; wgpu is not and does not.
SDL is for everyone. I use it for a terminal emulator because it’s easier to write something cross platform in SDL than it is to use platform native widgets APIs.
Can the SDL terminal emulator handle up-arrow /slash commands, and cool CLI things like Textual and IPython's prompt_toolkit (a readline/.inputrc alternative that supports multi-line editing, argument tab completion, and syntax highlighting), in a game and/or on a PC?
I think you're confusing the roles of terminal emulator and shell. The emulator mainly hosts the window for a text-based application: print to the screen, send input, implement escape sequences, offer scrollback, handle OS copy-paste, etc. The features you mentioned would be implemented by the hosted application, such as a shell (which they've also implemented separately).
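The division of labor can be sketched in miniature: escape sequences are what the emulator interprets, while the hosted shell decides what text to send. A minimal Python sketch (illustrative only; a real emulator implements far more of VT100 than this) of recognizing and stripping ANSI CSI sequences:

```python
import re

# CSI sequences: ESC [ params... final-byte (colors, cursor movement, clears).
CSI = re.compile(r"\x1b\[[0-9;?]*[A-Za-z]")

def strip_csi(text: str) -> str:
    """Remove CSI escape sequences, leaving only the printable text."""
    return CSI.sub("", text)

print(strip_csi("\x1b[31mred\x1b[0m"))  # -> red
```

A real emulator would dispatch on the final byte (`m` for colors, `H` for cursor position, `J` for clears) instead of discarding the sequence.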
Does the SDL terminal emulator support enough of VT100 (is that what it implements?) to host an [in-game] console shell TUI with advanced features?
I'm not related to hnlmorg, but I'm assuming the project they refer to is mxtty [1], so check for yourself.
I’d love to see Raylib get an SDL GPU backend. I’d pick it up in a heartbeat.
raylib > "How to compile against the [SDL2] backend" https://github.com/raysan5/raylib/discussions/3764
Rust solves the problem of incomplete Kernel Linux API docs
This is one of the biggest advantages of the newer wave of more expressively typed languages like Rust and Swift.
They remove a lot of ambiguity in how something should be held.
Is this data type or method thread safe? Well I don’t need to go look up the docs only to find it’s not mentioned anywhere but in some community discussion. The compiler tells me.
Reviewing code? I don’t need to verify every use of a pointer is safe because the code tells me itself at that exact local point.
This isn’t unique to the Linux kernel. This is every codebase that doesn’t use a memory safe language.
With memory safe languages you can focus so much more on the implementation of your business logic than making sure all your codebase's invariants are in your head at a given time.
It's not _just_ memory safety. In my experience, Rust is also liberating in the sense of mutation safety. With memory safe languages such as Java or Python or JavaScript, I must paranoidly clone stuff when passing stuff to various functions whose behaviour I don't intimately know of, and that is a source of constant stress for me.
Also newtype wrappers.
If you have code that deals e.g. with pounds and kilograms, Dollars and Euros, screen coordinates and window coordinates, plain text and HTML and so on, those values are usually encapsulated in safe wrapper structs instead of being passed as raw ints or floats.
This prevents you from accidentally passing the wrong kind of value into a function, and potentially blowing up your $125 million spacecraft[1].
I also find that such wrappers also make the code far more readable, as there's no confusion exactly what kind of value is expected.
[1] https://www.simscale.com/blog/nasa-mars-climate-orbiter-metr...
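The newtype idea carries over to Python as well; a minimal sketch with illustrative names (PoundsForce, Newtons, and total_impulse are assumptions here, not from any particular codebase):

```python
from dataclasses import dataclass

# Distinct wrapper types make the unit part of the function signature,
# so passing pound-force where newtons are expected is a type error.
@dataclass(frozen=True)
class PoundsForce:
    value: float

@dataclass(frozen=True)
class Newtons:
    value: float

LBF_TO_N = 4.4482216152605  # exact conversion factor by definition

def to_newtons(thrust: PoundsForce) -> Newtons:
    return Newtons(thrust.value * LBF_TO_N)

def total_impulse(thrust: Newtons, seconds: float) -> float:
    # Accepts only Newtons; a raw float or a PoundsForce fails type checking.
    return thrust.value * seconds

print(total_impulse(to_newtons(PoundsForce(1.0)), 2.0))
```

The conversion to base units happens in exactly one audited place, instead of being an implicit assumption at every call site.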
As much as I like Rust, I don’t think that it would have solved the Mars Climate Orbiter problem. That was caused by one party writing numbers out into a CSV file in one unit, and a different party reading the CSV file but assuming that the numbers were in a different unit. Both parties could have been using Rust, and using types to encode physical units, and the problem could still have happened.
W3C CSVW supports per-column schema.
Serialize a dict containing a value with uncertainties and/or Pint (or astropy.units) and complex values to JSON, then read it from JSON back to the same types. Handle datetimes, complex values, and categoricals
"CSVW: CSV on the Web" https://github.com/jazzband/tablib/issues/305
7 Columnar metadata header rows for a CSV: column label, property URI, datatype, quantity/unit, accuracy, precision, significant figures https://wrdrd.github.io/docs/consulting/linkedreproducibilit...
CSV on the Web: A Primer > 6. Advanced Use > 6.1 How do you support units of measure? https://www.w3.org/TR/tabular-data-primer/#units-of-measure
You can specify units in a CSVW file with QUDT, an RDFS schema and vocabulary for Quantities, Units, Dimensions, and Types
Schema.org has StructuredValue and rdfs:subPropertyOf like QuantitativeValue and QuantitativeValueDistribution: https://schema.org/StructuredValue
There are linked data schema for units, and there are various in-code in-RAM typed primitive and compound type serialization libraries for various programming languages; but they're not integrated, so we're unable to share data with units between apps and fall back to CSVW.
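A minimal sketch of the columnar-metadata-header-rows idea, using stdlib csv only (real CSVW keeps the per-column metadata in a JSON sidecar rather than extra header rows, so this is a simplification):

```python
import csv
import io

# Two header rows: column labels, then units per column.
DATA = """thrust,duration
lbf,s
19.4,2.0
"""

def read_with_units(text):
    """Parse a CSV whose second header row carries units; each value
    comes back as a (number, unit) pair instead of a bare float."""
    rows = list(csv.reader(io.StringIO(text)))
    labels, units = rows[0], rows[1]
    return [
        {lab: (float(val), unit) for lab, unit, val in zip(labels, units, row)}
        for row in rows[2:]
    ]

records = read_with_units(DATA)
print(records[0]["thrust"])  # -> (19.4, 'lbf')
```

With the unit attached to every value, a consumer can refuse or convert mismatched units instead of silently assuming them.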
There are zero-copy solutions for sharing variables between cells of e.g. a polyglot notebook with input cells written in more than one programming language without reshaping or serializing.
Sandbox Python notebooks with uv and marimo
Package checksums can be specified per-platform in requirements.txt and Pipfile.lock files. Is it advisable to only list package names, in TOML that can't be parsed from source comments with the AST parser?
.ipynb nbformat inlines binary data outputs from e.g. _repr_png_() as base64 data, rather than delimiting code and binary data in .py files.
One file zipapp PEX Python Executables are buildable with Twitter Pants build, Buck, and Bazel. PEX files are executable ZIP files with a Python header.
There's a GitHub issue about a Markdown format for notebooks because percent format (`# %%`) and myst-nb formats don't include outputs.
There's also a GitHub issue about Jupytext storing outputs.
Containers are sandboxes. repo2docker with repo2podman sandboxes a git repo with notebooks by creating a container with a recent version of the notebook software on top.
How does your solution differ from ipyflow and rxpy, for example? Are ipywidgets supported or supportable?
> Is it advisable to only list package names, in TOML that can't be parsed from source comments with the AST parser?
This at the top of a notebook is less reproducible and less secure than a requirements.txt with checksums or better:
%pip install ipytest pytest-cov jupyterlab-miami-nights
%pip install -q -r requirements.txt
%pip?
%pip --help
But you don't need to run pip every time you run a notebook, so it's better to comment out the install steps. But then running the notebook in a CI task requires manual input, unless everything in a requirements.txt and/or environment.yml or [Jupyter REES] is installed first before running the notebook, like repo2docker does:
#%pip install -r requirements.txt
#!pip --help
#!mamba env update -f environment.yml
#!pixi install --manifest-path pyproject.toml_or_pixi.toml
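One way to avoid re-running pip on every notebook run is to guard the install cell by checking whether the requirements are already importable. A sketch (note that PyPI package names and import names can differ, which this deliberately ignores):

```python
import importlib.util

def missing(packages):
    """Return the packages that aren't importable yet, so a notebook
    can skip its install cell when the environment is already built."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# In a notebook cell this would gate the install magic, e.g.:
# if missing(["pytest", "ipytest"]):
#     %pip install -q -r requirements.txt
print(missing(["sys", "nonexistent_pkg_zz9"]))  # -> ['nonexistent_pkg_zz9']
```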
Lucee: A light-weight dynamic CFML scripting language for the JVM
Similar to ColdFusion CFML and Open Source: ZPT Zope Templates: https://zope.readthedocs.io/en/latest/zopebook/ZPT.html
https://zope.readthedocs.io/en/latest/zopebook/AppendixC.htm... :
> Zope Page Templates are an HTML/XML generation tool. This appendix is a reference to Zope Page Templates standards: Template Attribute Language (TAL), TAL Expression Syntax (TALES), and Macro Expansion TAL (METAL).
> TAL Overview: The Template Attribute Language (TAL) standard is an attribute language used to create dynamic templates. It allows elements of a document to be replaced, repeated, or omitted
Repoze is Zope backwards. PyPI is built on the Pyramid web framework, which comes from repoze.bfg, which was written in response to Zope2 and Plone and Zope3.
Chameleon ZPT templates with Pyramid pyramid.renderers.render_to_response(request): https://docs.pylonsproject.org/projects/pyramid/en/latest/na...
FCast: Casting Made Open Source
How does FCast differ from Matter Casting?
https://news.ycombinator.com/item?id=41171060&p=2#41172407
"What Is Matter Casting and How Is It Different From AirPlay or Chromecast?" (2024) https://www.howtogeek.com/what-is-matter-casting-and-how-is-... :
> You can also potentially use the new casting standard to control some of your TV’s functions while casting media on it, a task at which both AirPlay and Chromecast are somewhat limited.
Feature ideas: PIP Picture-in-Picture, The ability to find additional videos and add to a [queue] playlist without stopping the playing video
Instead of waiting and hoping that big companies will implement the standard, just make it as easy as possible to adopt by having receivers for all platforms and making client libraries that can cast to AirPlay, Chromecast, FCast, and others seamlessly.
Does the current app support AirPlay and Chromecast as different receivers/backends? The website doesn't mention anything about it. Is there also any plan for an iOS app?
Some feedback: I would also add dedicated buttons for downloading macOS and Windows binaries; for typical users the GitLab button will be too scary. The website is also not clear on whether there is an SDK for developers (one that also supports AirPlay and Chromecast) and what language bindings it supports.
GitHub has package repos for hosting package downloads.
A SLSA Builder or Generator can sign packages and container images with sigstore/cosign.
It's probably also possible to build and sign a repo metadata index with GitHub release attachment URLs and host that on GitHub Pages. But at scale, to host releases you need a CDN, release signing keys to sign the repo metadata, and clients that update only when the release attachment signature matches the per-release per-platform key; the app store does that for you.
Show HN: bpfquery – experimenting with compiling SQL to bpf(trace)
Hello! The last few weeks I've been experimenting with compiling sql queries to bpftrace programs and then working with the results. bpfquery.com is the result of that, source available at https://github.com/zmaril/bpfquery. It's a very minimal sql to bpftrace compiler that lets you explore what's going on with your systems. It implements queries, expressions, and filters/wheres/predicates, and has a streaming pivot table interface built on https://perspective.finos.org. I am still figuring out how to do windows, aggregations and joins though, but the pivot table interface actually lets you get surprisingly far. I hope you enjoy it!
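A hypothetical sketch of the kind of translation involved, not bpfquery's actual compiler (the real project parses full SQL and supports much more; the function name and output shape here are assumptions):

```python
def sql_to_bpftrace(table, columns, where=None):
    """Translate a tiny SELECT into a bpftrace one-liner.
    Illustrative only: maps a dotted pseudo-table to a tracepoint probe,
    a WHERE clause to a /predicate/, and columns to printf arguments."""
    probe = "tracepoint:" + table.replace(".", ":")
    predicate = f" /{where}/" if where else ""
    fmt = ", ".join("%s" if col == "comm" else "%d" for col in columns)
    args = ", ".join(columns)
    return f'{probe}{predicate} {{ printf("{fmt}\\n", {args}); }}'

# SELECT pid, comm FROM syscalls.sys_enter_openat WHERE pid > 1000
print(sql_to_bpftrace("syscalls.sys_enter_openat", ["pid", "comm"], "pid > 1000"))
```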
RIL about how the ebpf verifier attempts to prevent infinite loops given rule ordering and rewriting transformations.
There are many open query planners; maybe most are hardly reusable.
There's a wasm-bpf; and also duckdb-wasm, sqlite in WASM with replication and synchronization, datasette-lite, JupyterLite
wasm-bpf: https://github.com/eunomia-bpf/wasm-bpf#how-it-works
Does this make databases faster or more efficient? Is there process or query isolation?
The Surprising Cause of Qubit Decay in Quantum Computers
> The transmission of supercurrents is made possible by the Josephson effect, where two closely spaced superconducting materials can support a current with no applied voltage. As a result of the study, previously unattributed energy loss can be traced to thermal radiation originating at the qubits and propagating down the leads.
> Think of a campfire warming someone at the beach – the ambient air stays cold, but the person still feels the warmth radiating from the fire. Karimi says this same type of radiation leads to dissipation in the qubit.
> This loss has been noted before by physicists who have conducted experiments on large arrays of hundreds of Josephson junctions placed in circuit. Like a game of telephone, one of these junctions would seem to destabilize the rest further down the line.
ScholarlyArticle: "Bolometric detection of Josephson radiation" (2024) https://www.nature.com/articles/s41565-024-01770-7
"Computer Scientists Prove That Heat Destroys Quantum Entanglement" (2024) https://news.ycombinator.com/item?id=41381849
The Future of TLA+ [pdf]
A TLA+ alternative people might find curious.
What are other limits and opportunities for TLA+ and similar tools?
Limits of TLA+
- It cannot compile to working code
- Steep learning curve
Opportunities for TLA+
- Helps you understand complex abstractions & systems clearly.
- It's extremely effective at communicating the components that make up a system with others.
Let me give you a real practical example.
In AI models there is this component called a "Transformer". It underpins ChatGPT (the "T" in ChatGPT).
If you read the 2017 Transformer paper "Attention Is All You Need",
they use human language, diagrams, and mathematics to describe their idea.
However, if you try to build your own "Transformer" using that paper as your only resource, you're going to struggle interpreting what they are saying to get working code.
Even if you get the code working, how sure are you that what you have created is EXACTLY what the authors are talking about?
English is too verbose, diagrams are open to interpretation & mathematics is too ambiguous/abstract. And already written code is too dense.
TLA+ is a notation that tends to be used to "specify systems".
In TLA+ everything is defined in terms of a state machine: hardware, software algorithms, consensus algorithms (Paxos, Raft, etc.).
So why TLA+?
If something is "specified" in TLA+:
- You know exactly what it is — just by interpreting the TLA+ spec
- If you have an idea to communicate, TLA+ literate people can understand exactly what you're talking about.
- You can find bugs in algorithms, hardware, and processes just by modeling them in TLA+. So before building hardware or software you can check its validity & fix flaws in its design, rather than committing expensive resources only to find issues in production.
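The "find bugs just by modeling" part can be illustrated with a toy model checker: enumerate every reachable state of a state machine and test an invariant in each. This is only the idea behind TLC, not its implementation:

```python
from collections import deque

def check(init, next_states, invariant):
    """Breadth-first search over all reachable states; return the first
    state that violates the invariant, or None if none exists."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy spec: an hour-clock modulo 4. The safety property "clock < 4"
# holds in every reachable state; "clock < 3" is violated, and the
# checker reports the concrete counterexample state.
tick = lambda s: [(s + 1) % 4]
print(check(0, tick, lambda s: s < 4))  # -> None
print(check(0, tick, lambda s: s < 3))  # -> 3
```

Real TLA+ adds a specification language, fairness/liveness checking, and symmetry reduction on top of this exhaustive-enumeration core.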
Is that a practical example? Has anyone specified a transformer using TLA+? More generally, is TLA+ practical for code that uses a lot of matrix multiplication?
The most practical examples I’m aware of are the usage of TLA+ to specify systems at AWS: https://lamport.azurewebsites.net/tla/formal-methods-amazon....
From "Use of Formal Methods at Amazon Web Services" (2014) https://lamport.azurewebsites.net/tla/formal-methods-amazon.... :
> What Formal Specification Is Not Good For: We are concerned with two major classes of problems with large distributed systems: 1) bugs and operator errors that cause a departure from the logical intent of the system, and 2) surprising ‘sustained emergent performance degradation’ of complex systems that inevitably contain feedback loops. We know how to use formal specification to find the first class of problems. However, problems in the second category can cripple a system even though no logic bug is involved. A common example is when a momentary slowdown in a server (perhaps due to Java garbage collection) causes timeouts to be breached on clients, which causes the clients to retry requests, which adds more load to the server, which causes further slowdown. In such scenarios the system will eventually make progress; it is not stuck in a logical deadlock, livelock, or other cycle. But from the customer's perspective it is effectively unavailable due to sustained unacceptable response times. TLA+ could be used to specify an upper bound on response time, as a real-time safety property. However, our systems are built on infrastructure (disks, operating systems, network) that do not support hard real-time scheduling or guarantees, so real-time safety properties would not be realistic. We build soft real-time systems in which very short periods of slow responses are not considered errors. However, prolonged severe slowdowns are considered errors. We don’t yet know of a feasible way to model a real system that would enable tools to predict such emergent behavior. We use other techniques to mitigate those risks.
Delay, cycles, feedback; [complex] [adaptive] nonlinearity
Formal methods including TLA+ also can't/don't prevent or can only workaround side channels in hardware and firmware that is not verified. But that's a different layer.
> This raised a challenge; how to convey the purpose and benefits of formal methods to an audience of software engineers? Engineers think in terms of debugging rather than ‘verification’, so we called the presentation “Debugging Designs” [8] . Continuing that metaphor, we have found that software engineers more readily grasp the concept and practical value of TLA+ if we dub it:
Exhaustively testable pseudo-code
> We initially avoid the words ‘formal’, ‘verification’, and ‘proof’, due to the widespread view that formal methods are impractical. We also initially avoid mentioning what the acronym ‘TLA’ stands for, as doing so would give an incorrect impression of complexity.

Isn't there a hello world with vector clocks tutorial? A simple, formally-verified hello world kernel module with each of the potential methods would be demonstrative, but then don't you need to model the kernel with abstract distributed concurrency primitives too?
From https://news.ycombinator.com/item?id=40980370 ;
> - [ ] DOC: learnxinyminutes for tlaplus
> TLAplus: https://en.wikipedia.org/wiki/TLA%2B
> awesome-tlaplus > Books, (University) courses teaching (with) TLA+: https://github.com/tlaplus/awesome-tlaplus#books
FizzBee, Nagini, deal-solver, z3, dafny; https://news.ycombinator.com/item?id=39904256#39938759 ,
"Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=40680722
awesome-safety-critical:
Computer Scientists Prove That Heat Destroys Quantum Entanglement
This is really not a surprise. And I'd like to clarify more misconceptions of modern science:
It isn't their identity the atoms give up, their hyperdimensional vibration synchronizes.
This is puzzling and spooky enough to warrant all of those flavored adjectives because science is convinced the Universe is built up from smallest states, when in actuality the Universe is a singularity of potential which distributes itself (over nothing), and state continuously resolves through constructive and destructive interference (all space/time manifestation), probably bound by spin characteristic (like little knots which cannot unwind themselves.)
As universal potential exists outside of space and time (giving rise to it, an alt theory to the big bang), when particles are synchronized (at any distance) the dispositions (not identities) of their potentials are bound (identity would be the localized existential particle). Any destructive interference upon the hyperdimensional vibration will destroy the entanglement.
The domain to be explored is, what can we do with constructive interference?
Modern science worshipers will have to bite on this one, and admit their sacred axioms are wrong.
So, modulating thermal insulation of (a non-superconducting and non-superfluidic or any) quantum simulator results in loss of entanglement.
How, then, can entanglement across astronomical distances occur without cooler temps the whole way there, if heat destroys all entanglement?
Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
Btw, entangled swarms can be re-entangled over and over and over. The same entangled scope.
The entanglement is the vibrational synchrony in the zero dimensional Universal Potential (or just capacity for existential Potential, to sound less grandiose.)
So everything at that vibrational axis will bias to the same disposition.
There are more vibrational axes than atoms in the Universe; these are the purely random Q we expect when made decoherent.
Synchronized, our technology will come by how we may constructively and destructively interfere with the informational content of the "disposition". Any perturbance is informational, and I think you can broadcast analog motion pictures, in hyperdimensional clarity with sound and warm lighting; qubits are irrelevant and a dead end.
Quantum holography is a sieve of some sort, where shadows may be cast upon extradimensional walls.
Hash collisions may be found by superimposing the probabilities factored by their algorithms such that constructive and destructive interference reduce the multidimensional model to the smallest probable sets.
Ciphers may be broken by combining constructive key attempts (thousands of millions at a time?) in a "singular domain" with the hash collision solution.
Heat at some level, not every level. Same goes for magnets or heck, even biological specimens.
Computer haxor discovers that heat destroys all signs of life in organic material.
Sensitive apparatus requires insulation.
98% accuracy in predicting diseases by the colour of the tongue
"Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
> This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The obtained results from the machine learning algorithms illustrated that XGBoost had the highest accuracy at 98.71%, while the NB algorithm had the lowest accuracy, with 91.43%
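For intuition about the colorspace step, a toy HSV hue-bucket classifier over the paper's seven classes; the thresholds are illustrative guesses and nothing like the trained models in the study:

```python
import colorsys

def classify_color(r, g, b):
    """Bucket an RGB pixel into one of seven color classes by HSV hue.
    Thresholds are hand-picked for illustration only."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.1:  # achromatic: distinguish white from gray by brightness
        return "white" if v > 0.9 else "gray"
    deg = h * 360
    if deg < 20 or deg >= 330:
        return "pink" if s < 0.5 else "red"  # desaturated red reads as pink
    if deg < 75:
        return "yellow"
    if deg < 165:
        return "green"
    if deg < 255:
        return "blue"
    return "pink"

print(classify_color(255, 0, 0))  # -> red
```

The study's point is that such raw thresholds fail under varied lighting, which is why it trains on multiple color spaces and ML models instead.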
"Development of Deep Ensembles to Screen for Autism and Symptom Severity Using Retinal Photographs" (2023) https://jamanetwork.com/journals/jamanetworkopen/fullarticle... ; 100% accuracy where a recently-published best method with charts is 80% accurate: "Machine Learning Prediction of Autism Spectrum Disorder from a Minimal Set of Medical and Background Information" (2024) https://jamanetwork.com/journals/jamanetworkopen/fullarticle... https://github.com/Tammimies-Lab/ASD_Prediction_ML_Rajagopal...
I don't think any of the medical imaging NN training projects have similar colorspace analysis.
The possibilities for dark matter have shrunk
There is no dark matter in theories of superfluid quantum gravity.
Dirac later arrived at the Dirac sea. Gödel had already proposed dust solutions, which are fluidic.
PBS Spacetime has a video out now on whether gravity is actually random. I don't know whether it addresses theories of superfluid quantum gravity.
>>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
Database “sharding” came from Ultima Online? (2009)
Wikipedia has:
Shard (database architecture) > Etymology: https://en.wikipedia.org/wiki/Shard_(database_architecture)#...
Partition > Horizontal partitioning --> Sharding: https://en.wikipedia.org/wiki/Partition_(database)
Database scalability > Techniques > Partitioning: https://en.wikipedia.org/wiki/Database_scalability
Network partition: https://en.wikipedia.org/wiki/Network_partition
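Whatever the etymology, the technique itself is small: a stable hash maps each key to a shard, so the same key always lands on the same partition. A minimal sketch:

```python
import hashlib

def shard_for(key: str, n_shards: int) -> int:
    """Stable hash-based horizontal partitioning: the same key always
    maps to the same shard (unlike hash(), sha1 is stable across runs)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_shards

print(shard_for("user:42", 8))
```

The modulo scheme reshuffles almost every key when n_shards changes, which is why production systems usually layer consistent hashing or directory lookups on top.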
why is every social media network now just "post something with varying levels of correctness because it farms engagement"
it's so exhausting needing to just read comments to get the actual, real truth
Do you realize that the linked Wikipedia post agrees with the article? It lists Ultima Online as one of two likely sources for the term "sharding."
Brain found to store three copies of every memory
I think the evidence for this should be extraordinary because it is so apparently unlikely.
Why would the brain store multiple copies of a memory? It’s so inefficient.
On a small level computers do that: cpu cache (itself 3 levels), GPU memory (optional), main memory, hdd cache.
The reason why animal brains would store multiple copies of a memory is functional proximity. You need memory near the limbic system for neurotic processing and fear conditioning. Long term storage is near the brain stem so that it can eventually bleed into the cerebellum to become muscle memory. There is memory storage near the frontal lobes so that people can reason about it with their advanced processors like speech parsing, the visual cortex, decision bias, and so forth.
OT ScholarlyArticle: "Divergent recruitment of developmentally defined neuronal ensembles supports memory dynamics" (2024) https://www.science.org/doi/10.1126/science.adk0997
"Our Brains Instantly Make Two Copies of Each Memory" (2017) https://www.pbs.org/wgbh/nova/article/our-brains-instantly-m... :
> We might be wrong. New research suggests that our brains make two copies of each memory in the moment they are formed. One is filed away in the hippocampus, the center of short-term memories, while the other is stored in cortex, where our long-term memories reside.
From https://www.psychologytoday.com/us/blog/the-athletes-way/201... :
> Surprisingly, the researchers found that long-term memories remain "silent" in the prefrontal cortex for about two weeks before maturing and becoming consolidated into permanent long-term memories.
ScholarlyArticle: "Engrams and circuits crucial for systems consolidation of a memory" (2017) https://www.science.org/doi/10.1126/science.aam6808
So that makes four (4) copies of each memory in the brain if you include the engram cells in the prefrontal cortex.
What is the survival advantage to redundant, resilient recall; why do brains with such traits survive and where and when in our evolutionary lineage did such complexity arise?
"Spatial Grammar" in DNA: Breakthrough Could Rewrite Genetics Textbooks
ScholarlyArticle: "Position-dependent function of human sequence-specific transcription factors" (2024) https://www.nature.com/articles/s41586-024-07662-z
Celebrating 6 years since Valve announced Steam Play Proton for Linux
I've seen it joked that with Proton, Win32 is a good stable ABI for gaming on Linux.
Given that, and that I'm most comfortable developing on Linux and in C++, does anyone have a good toolchain recommendation for cross compiling C++ to Windows from Linux? (Ideally something that I can point CMake at and get a build that I can test in Proton.)
MinGW works well (e.g. mingw-w64 in Debian/Ubuntu) with CMake; you just need to pass flags like
-DCMAKE_CXX_COMPILER=x86_64-w64-mingw32-g++ -DCMAKE_C_COMPILER=x86_64-w64-mingw32-gcc
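A CMake toolchain file makes those flags reusable; here is a minimal sketch assuming Debian/Ubuntu's mingw-w64 package layout (binary names and install prefix may differ on other distros):

```cmake
# mingw-w64-toolchain.cmake -- sketch for cross-compiling C++ to Windows.
# Assumes Debian/Ubuntu mingw-w64 packages; adjust the triplet/paths as needed.
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)
set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++)
set(CMAKE_RC_COMPILER x86_64-w64-mingw32-windres)
set(CMAKE_FIND_ROOT_PATH /usr/x86_64-w64-mingw32)
# Search headers/libs only in the target environment, programs on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Then `cmake -DCMAKE_TOOLCHAIN_FILE=mingw-w64-toolchain.cmake ..` and test the resulting .exe under Wine or Proton.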
Something like SDL or Raylib is useful for cross-platform windowing and sound if you are writing games.
Hey, TuxMath is written with SDL, though there's a separate JS port now IIUC.
/? mingw: https://github.com/search?q=mingw&type=repositories
The msys2/MINGW-packages are PKGBUILD packages: https://github.com/msys2/MINGW-packages :
> Package scripts for MinGW-w64 targets to build under MSYS2.
> [..., SDL, gles, glfw, egl, glbinding, cargo-c, gtest, cppunit, qt5, gtk4, icu, ki18n-qt5, SDL2_pango, jack2, gstreamer, ffmpeg, blender, gegl, gnome-text-editor, gtksourceview, kdiff3, libgit2, libusb, libressl, libsodium, libserialport, libslirp, hugo]
PKGBUILD is a bash script packaging spec from Arch, which builds packages with makepkg.
msys2/setup-msys2: https://github.com/msys2/setup-msys2 :
> setup-msys2 is a GitHub Action (GHA) to setup an MSYS2 environment (i.e. MSYS, MINGW32, MINGW64, UCRT64, CLANG32, CLANG64 and/or CLANGARM64 shells)
Though, if you're writing a 3D app/game: panda3d, for example, already builds for Windows, Mac, and Linux; pygbag compiles panda3d to WASM; there's also harfang-wasm; and TIL that leptos is more like React in Rust and should work for GUI apps, too.
panda3d > Building applications: https://docs.panda3d.org/1.11/python/distribution/building-b...
https://github.com/topics/mingw :
> nCine, win-sudo, drmingw debugger,
mstorsjo/llvm-mingw: https://github.com/mstorsjo/llvm-mingw :
> Address Sanitizer and Undefined Behaviour Sanitizer, LLVM Control Flow Guard -mguard=cf ; i686, x86_64, armv7 and arm64
msys2/MINGW-packages: "[Wish] Add 3D library for Python" https://github.com/msys2/MINGW-packages/issues/21325
panda3d/panda3d: https://github.com/panda3d/panda3d
Before Steam for Linux, there was tuxmath.
tux4kids/tuxmath//mingw/ has a CodeBlocks build config with mingw32 FWICS: https://github.com/tux4kids/tuxmath/blob/master/mingw/tuxmat...
Code::Blocks: https://en.m.wikipedia.org/wiki/Code::Blocks
There's a CMake build: https://github.com/tux4kids/tuxmath/blob/master/CMakeLists.t...
But it says the autotools build is still the canonical one: configure.ac for autoconf and Makefile.am for automake.
SDL supports SVG since SDL_image 2.0.2 with IMG_LoadSVG_RW() and since SDL_image 2.6.0 with IMG_LoadSizedSVG_RW: https://wiki.libsdl.org/SDL2_image/IMG_LoadSizedSVG_RW
conda-forge has SDL on Win/Mac/Lin.
conda-forge/sdl2-feedstock: https://github.com/conda-forge/sdl2-feedstock
emscripten-forge does not yet have SDL or .*gl.* or tuxmath or bash or busybox: https://github.com/emscripten-forge/recipes/tree/main/recipe...
conda-forge/panda3d-feedstock / recipe/meta.yaml builds panda3d on Win, Mac, Lin, and Github Actions containers: https://github.com/conda-forge/panda3d-feedstock/blob/main/r... :
conda install -c conda-forge panda3d # mamba install panda3d # miniforge
# pip install panda3d
Panda3d docs > Distributing Panda3D Applications > Third-party dependencies lists a few libraries that I don't think are in the MSYS or conda-forge package repos:
https://docs.panda3d.org/1.10/python/distribution/thirdparty...
What about math games on open source operating systems, Steam?
A Manim renderer for [game engine] would be cool for school, and cool for STEM.
"Render and interact with through Blender, o3de, panda3d ManimCommunity/manim#3362" https://github.com/ManimCommunity/manim/issues/3362
GitHub Named a Leader in the Gartner First Magic Quadrant for AI Code Assistants
Gartner "Magic Quadrant for AI Code Assistants" (2024) https://www.gartner.com/doc/reprints?id=1-2IKO4MPE&ct=240819...
Additional criteria for assessing AI code assistants from https://news.ycombinator.com/item?id=40478539 re: Text-to-SQL benchmarks:
codefuse-ai/Awesome-Code-LLM > Analysis of AI-Generated Code, Benchmarks: https://github.com/codefuse-ai/Awesome-Code-LLM :
> 8.2. Benchmarks: Integrated Benchmarks, Program Synthesis, Visually Grounded Program Synthesis, Code Reasoning and QA, Text-to-SQL, Code Translation, Program Repair, Code Summarization, Defect/Vulnerability Detection, Code Retrieval, Type Inference, Commit Message Generation, Repo-Level Coding
OT did not assess:
Aider: https://github.com/paul-gauthier/aider :
> Aider works best with GPT-4o & Claude 3.5 Sonnet and can connect to almost any LLM.
> Aider has one of the top scores on SWE Bench. SWE Bench is a challenging software engineering benchmark where aider solved real GitHub issues from popular open source projects like django, scikit-learn, matplotlib, etc.
SWE Bench benchmark: https://www.swebench.com/
A carbon-nanotube-based tensor processing unit
"A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
> Abstract: The growth of data-intensive computing tasks requires processing units with higher performance and energy efficiency, but these requirements are increasingly difficult to achieve with conventional semiconductor technology. One potential solution is to combine developments in devices with innovations in system architecture. Here we report a tensor processing unit (TPU) that is based on 3,000 carbon nanotube field-effect transistors and can perform energy-efficient convolution operations and matrix multiplication. The TPU is constructed with a systolic array architecture that allows parallel 2 bit integer multiply–accumulate operations. A five-layer convolutional neural network based on the TPU can perform MNIST image recognition with an accuracy of up to 88% for a power consumption of 295 µW. We use an optimized nanotube fabrication process that offers a semiconductor purity of 99.9999% and ultraclean surfaces, leading to transistors with high on-current densities and uniformity. Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
That's 1 TOPS/W.
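For intuition, the multiply-accumulate operation that each cell of the systolic array performs can be sketched in plain Python over 2-bit operands. This is illustrative only, not the paper's implementation:

```python
# Matrix multiply as repeated multiply-accumulate (MAC) steps, the
# operation a systolic array performs in parallel. Operands here are
# 2-bit integers (0..3), matching the paper's 2-bit INT MACs.

def mac_matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0
            for t in range(k):
                acc += a[i][t] * b[t][j]  # one multiply-accumulate step
            out[i][j] = acc
    return out

a = [[1, 2], [3, 0]]  # 2-bit operands
b = [[2, 1], [0, 3]]
print(mac_matmul(a, b))  # [[2, 7], [6, 3]]
```

A hardware systolic array runs all of these MAC steps concurrently, streaming operands between neighboring cells; the loop above just serializes the same arithmetic.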
"Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
"Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490
Or CNT Carbon Nanotubes, or TWCNT Twisted Carbon Nanotubes.
From "Rivian reduced electrical wiring by 1.6 miles and 44 pounds" (2024) https://news.ycombinator.com/item?id=41210021 :
> Are there yet CNT or TWCNT Twisted Carbon Nanotube substitutes for copper wiring?
"Twisted carbon nanotubes store more energy than lithium-ion batteries" (2024) https://news.ycombinator.com/item?id=41159421
- NewsArticle about the ScholarlyArticle: "The first tensor processor chip based on carbon nanotubes could lead to energy-efficient AI processing" (2024) https://techxplore.com/news/2024-08-tensor-processor-chip-ba...
How to build a 50k ton forging press
> By the early 2000s, parts from the heavy presses were in every U.S. military aircraft in service, and every airplane built by Airbus and Boeing.
> The savings on a heavy bomber was estimated to be even greater, around 5-10% of its total cost; savings on the B-52 alone were estimated to be greater than the entire cost of the Heavy Press Program.
These are wild stats.
Great article! I was fascinated to learn about the Heavy Press program for the first time, here on HN[1] a month ago, and am glad more about it is being posted.
It makes me think: what other processes could redefine an industry or way of thinking/designing if taken a step further? We had forging and extrusion presses … but huge, high pressure ones changed the game entirely.
> It makes me think: what other processes could redefine an industry or way of thinking/designing if taken a step further
Pressure-injection molded hemp plastic certainly meets spec for automotive and aerospace applications.
"Plant-based epoxy enables recyclable carbon fiber" (2022) [that's stronger than steel and lighter than fiberglass] https://news.ycombinator.com/item?id=30138954 ... https://news.ycombinator.com/item?id=37560244
Silica aerogels are dermally abrasive. Applications for non-silica aerogels - for example hemp aerogels - include thermal insulation, packaging, maybe upholstery fill.
There's a new method to remove oxygen from Titanium: "Cheap yet ultrapure titanium metal might enable widespread use in industry" (2024) https://news.ycombinator.com/item?id=40768549
"Electric recycling of Portland cement at scale" (2024) https://www.nature.com/articles/s41586-024-07338-8 ... "Combined cement and steel recycling could cut CO2 emissions" https://news.ycombinator.com/item?id=40452946
"Researchers create green steel from toxic [aluminum production waste] red mud in 10 minutes" (2024) https://newatlas.com/materials/toxic-baulxite-residue-alumin...
There are many new imaging methods for quality inspection of steel and other metals and alloys, and biocomposites.
"Seeding steel frames brings destroyed coral reefs back to life" (2024) https://news.ycombinator.com/item?id=39735205
[deleted]
Seven basic rules for causal inference
At the bottom, the author mentions that by "correlation" they don't mean "linear correlation", but all their diagrams show the presence or absence of a clear linear correlation, and code examples use linear functions of random variables.
They offhandedly say that "correlation" means "association" or "mutual information", so why not just do the whole post in terms of mutual information? I think the main issue with that is just that some of these points become tautologies -- e.g. the first point, "independent variables have zero mutual information" ends up being just one implication of the definition of mutual information.
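That tautology is easy to check numerically. Here is a stdlib-only sketch of discrete mutual information; the function name and the two toy joint distributions are made up for illustration:

```python
from math import log2

def mutual_info(joint):
    # joint: dict mapping (x, y) -> probability.
    # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two independent fair coins: zero mutual information.
indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Perfectly dependent (Y = X): one full bit of mutual information.
dep = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_info(indep), mutual_info(dep))  # 0.0 1.0
```

Independence makes every ratio p(x,y)/(p(x)p(y)) equal 1, so every log term is zero, which is exactly the "tautology" above.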
This isn't a correction to your post, but a clarification for other readers: correlation implies dependence, but dependence does not imply correlation. Meanwhile, two variables share non-zero mutual information if and only if they are dependent.
By that measure, all of these Spurious Correlations indicate insignificant dependence, which isn't of utility: https://www.tylervigen.com/spurious-correlations
Isn't it possible to contrive an example where a test of pairwise dependence causes the statistician to error by excluding relevant variables from tests of more complex relations?
Trying to remember which of these factor both P(A|B) and P(B|A) into the test
I think you're using the word "insignificant" in a possibly misleading or confusing way.
I think in this context, the issue with the spurious correlations from that site is that they're all time series for overlapping periods. Of course, the people who collected these understood that time was an important causal factor in all these phenomena. In the graphical language of this post:
T --> X_i
T --> X_j
Since T is a common cause to both, we should expect to see a mutual information between X_i, X_j. In the paradigm here, we could try to control for T and see if a relationship persists (i.e. perhaps in the same month, collect observations for X_i, X_j in each of a large number of locales), and get a signal on whether the shared dependence on time is the only link.
If a test of dependence shows no significant results, that's not conclusive because of complex, nonlinear, and quantum 'functions'.
How are effect lag and lead expressed in said notation for expressing causal charts?
Should we always assume that t is a monotonically-increasing series, or is it just how we typically sample observations? Can traditional causal inference describe time crystals?
What is the quantum logical statistical analog of mutual information?
Are there pathological cases where mutual information and quantum information will not discover a relationship?
Does Quantum Mutual Information account for Quantum Discord if it only uses von Neumann definition of entropy?
Launch HN: MinusX (YC S24) – AI assistant for data tools like Jupyter/Metabase
Hey HN! We're Vivek, Sreejith and Arpit, and we're building MinusX (https://minusx.ai), a data science assistant for Jupyter and Metabase. MinusX is a Chrome extension (https://minusx.ai/chrome-extension) that adds an AI sidechat to your analytics apps. Given an instruction, our agent operates your app (by clicking and typing, just like you would) to analyze data and answer queries. Broadly, you can do 3 types of things: ask for hypotheses and explore data, extend existing notebooks/dashboards, or select a region and ask questions. There's a simple video walkthrough here: https://www.youtube.com/watch?v=BbHPyX2lJGI. The core idea is to "upgrade" existing tools, where people already do most of their data work, rather than building a new platform.
I (Vivek) spent 6 years working in various parts of the data stack, from data analysis at a 1000+ person ride hailing company to research at comma.ai, where I also handled most of the metrics and dashboarding infrastructure. The problems with data, surprisingly, were pretty much the same. Developers and product managers just want answers, or want to set up a quick view of some metric they care about. They often don't know which table contains what information, or what specific secret filters need to be kept in mind to get clean data. At large companies, analysts/scientists take care of most of these requests over a thousand back-and-forths. In small companies, most data projects end up being one-off efforts, and many die midway.
I've tried every new shiny analytics app out there and none of them fully solve this core issue. New tools also come with a massive cost: you have to convince everyone around you to move, change all your workflows and hope the new tool has all features your trusty old one did. Most people currently go to ChatGPT with barely any real background context, and admonish the model till it sputters some useful code, SQL or hypothesis. This is the kind of user we're trying to help.
The philosophy of MinusX mirrors that of comma. Just as comma is working on "an AI upgrade for your car", we want to retrofit analytics software with abilities that LLMs have begun to unlock. We also get a kick out of the fact that we use the same APIs humans use (clicking and typing), so we don't really need "permission" from any analytics app (just like comma.ai does not need permission from Mr Toyota Corolla) :)
How it works: Given an instruction, the MinusX chrome extension first constructs a simplified representation of the host application's state using the DOM, and a bunch of application specific cues. We also have a set of (currently) predefined actions (eg: clicking and typing) that the agent can use to interact with the host application. Any "complex action" can be described as a combination of these action-primitives. We send this entire context, the instruction and the actions to an LLM. The LLM responds with a sequence of actions which are executed and the revised state is computed and sent back to the LLM. This loop terminates when the LLM evaluates that the desired goals are met. Our architecture allows users to extend the capabilities of the agent by specifying new actions as combinations of the action-primitives. We're working on enabling users to do this through the extension itself.
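The loop described above might be sketched like this. The `llm` function is a stub standing in for the real model call, and the state format and action primitives are hypothetical simplifications, not MinusX's actual protocol:

```python
# Hypothetical sketch of the observe-act agent loop: send state +
# instruction to the model, execute returned actions, recompute state,
# repeat until the model says the goal is met.

def llm(state, instruction):
    # Stub: a real call would serialize the host app's state and parse
    # the model's action sequence. Here we finish once a click landed.
    if state.get("clicked"):
        return {"done": True, "actions": []}
    return {"done": False, "actions": [("click", "run-button")]}

def apply_action(state, action):
    # Execute one action primitive and return the revised state.
    kind, target = action
    new_state = dict(state)
    if kind == "click":
        new_state["clicked"] = target
    return new_state

def run_agent(state, instruction, max_steps=10):
    for _ in range(max_steps):
        reply = llm(state, instruction)
        if reply["done"]:
            return state
        for action in reply["actions"]:
            state = apply_action(state, action)
    return state

final = run_agent({}, "run the first notebook cell")
print(final)  # {'clicked': 'run-button'}
```

User-defined "complex actions" would then just be named compositions of these primitives.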
"Retrofitting" is a weird concept for software, and we've found that it takes a while for people to grasp what this actually implies. We think, with AI, it will be more of a thing. Most software we use will be "upgraded" and not always by the people making the original software.
We ourselves are focused on data analytics because we've worked in and around data science / data analysis / data engineering all our careers - working at startups, Google, Meta, etc - and understand it decently well. But since "retrofitting" can be just as useful for a bunch of other field-specific software, we're going to open-source the entire extension and associated plumbing in the near future.
Also, let's be real - a sequence of function calls rammed through a decision tree does not make any for-loop "agentic". The reality is that a large amount of in-the-loop data needed for tasks such as ours does not exist yet! Getting this data flywheel running is a very exciting axis as well.
The product is currently free to use. In the future, we'll probably charge a monthly subscription fee, and support local models / bring-your-own-keys. But we're still working that out.
We'd be super stoked for you to try out MinusX! You can find the extension here: https://minusx.ai/chrome-extension. We've also created a playground with data, for both Jupyter and Metabase, so once the extension is installed you can take it for a spin: https://minusx.ai/playground
We'd love to hear what you think about the idea, and anything else you'd like to share! Suggestions on which tools to support next are most welcome :)
XAI! Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Use case: Evidence-based policy; impact: https://en.wikipedia.org/wiki/Evidence-based_policy
Test case: "Find leading economic indicators like the bond yield curve from discoverable datasets, and cache retrieved data, e.g. with pandas-datareader"
Use case: Teach Applied ML, NNs, XAI: Explainable AI, and first ethics
Tools with integration opportunities:
Google Model Explorer: https://github.com/google-ai-edge/model-explorer
Yellowbrick ML teaches ML concepts with Visualizers for humans working with scikit-learn, and can be used to ensemble LLMs and other NNs because of its Estimator interfaces: https://www.scikit-yb.org/en/latest/
Manim, ManimML, Blender, panda3d, unreal: "Explain this in 3d, with an interactive game"
Khanmigo; "Explain this to me with exercises"
"And Calculate cost of computation, and Identify relatively sustainable lower-cost methods for these computations"
"Identify where this process, these tools, and experts picking algos, hyperparameters, and parameters has introduced biases into the analysis, given input from additional agents"
Uv 0.3 – Unified Python packaging
I love the idea of Rye/uv/PyBi managing the interpreter, but I get queasy that they are not official Python builds. Probably no issue, but it only takes one subtle bug to ruin my day.
Plus the potential supply-chain attack. I know official Python releases are as good as I can expect of a free open source project. Meanwhile, the third-party Python distribution is probably being built in Nebraska.
I'm from Nebraska. Unfortunately if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered with wind energy. Claim: Iowa has better Clean Energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).
Anyways, software supply chain security and Python & package build signing and then containers and signing them too
Conda-forge's builds are probably faster than the official CPython builds. conda-forge/python-feedstock//recipe/meta.yaml: https://github.com/conda-forge/python-feedstock/blob/main/re...
Conda-forge also has OpenBLAS, BLIS, Accelerate, netlib, and Intel MKL; conda-forge docs > switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base/#swit...
From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
> We will show how to use this in practice with `rattler-build`
> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.
> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and synergy possibilities of rattler-build and cibuildwheel
Linux distros build and sign Python and python3-* packages with GPG keys or similar, and then the package manager optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests and/or the package containing the manifest should be signed (so that tools like debsums and rpm --verify can detect disk-resident executable, script, data asset, and configuration file changes).
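The per-file checksum manifest idea can be sketched in a few lines of stdlib Python; the file layout and output format here are illustrative, not the actual dpkg/rpm manifest formats:

```python
import hashlib
import os
import tempfile

def manifest(root):
    # Walk the tree and record a sha256 digest per file, similar in
    # spirit to what debsums / rpm --verify validate against.
    entries = {}
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            entries[os.path.relpath(path, root)] = digest
    return entries

# Demo on a throwaway tree with one file.
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "app.py"), "w") as f:
        f.write("print('hi')\n")
    m = manifest(root)
print(m)
```

Verification is then just re-walking the installed tree and diffing digests against the signed manifest.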
virtualenvs can be mounted as a volume at build time with -v with some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. What is added to the virtualenv should have a signature and a version.
ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).
So, when you sign a container full of packages, you should check the package signatures; and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.
e.g. Dependabot - if working - will regularly run and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of reported security vulnerabilities in ossf/osv-schema format.
Is there already a way to, as a developer, sign Python packages built with cibuildwheel with Twine and TUF or sigstore to be https://SLSA.dev/ compliant?
Show HN: PgQueuer – Transform PostgreSQL into a Job Queue
PgQueuer is a minimalist, high-performance job queue library for Python, leveraging the robustness of PostgreSQL. Designed for simplicity and efficiency, PgQueuer uses PostgreSQL's LISTEN/NOTIFY to manage job queues effortlessly.
Does the celery SQLAlchemy broker support PostgreSQL's LISTEN/NOTIFY features?
Similar support in SQLite would simplify testing applications built with celery.
How to add table event messages to SQLite so that the SQLite broker has the same features as AMQP? Could a vtable facade send messages on table events?
Are there sqlite Triggers?
Celery > Backends and Brokers: https://docs.celeryq.dev/en/stable/getting-started/backends-...
/? sqlalchemy listen notify: https://www.google.com/search?q=sqlalchemy+listen+notify :
asyncpg.Connection.add_listener
sqlalchemy.event.listen, @listen_for
psycopg2 conn.poll(), while connection.notifies
psycopg2 > docs > advanced > Asynchronous notifications: https://www.psycopg.org/docs/advanced.html#asynchronous-noti...
PgQueuer.db, PgQueuer.listeners.add_listener; asyncpg add_listener: https://github.com/janbjorge/PgQueuer/blob/main/src/PgQueuer...
asyncpg/tests/test_listeners.py: https://github.com/MagicStack/asyncpg/blob/master/tests/test...
/? sqlite LISTEN NOTIFY: https://www.google.com/search?q=sqlite+listen+notify
sqlite3 update_hook: https://www.sqlite.org/c3ref/update_hook.html
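To the trigger question: yes. A sketch of emulating LISTEN/NOTIFY in stdlib sqlite3 uses an AFTER INSERT trigger that appends to a notifications table, which a worker then polls. The schema and channel names below are made up for illustration, not PgQueuer's:

```python
import sqlite3

# Emulate NOTIFY with a trigger feeding a polled notifications table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    payload TEXT,
    status TEXT DEFAULT 'queued'
);
CREATE TABLE notifications (
    id INTEGER PRIMARY KEY,
    channel TEXT,
    payload TEXT
);
CREATE TRIGGER jobs_notify AFTER INSERT ON jobs
BEGIN
    INSERT INTO notifications (channel, payload)
    VALUES ('job_inserted', NEW.id);
END;
""")

conn.execute("INSERT INTO jobs (payload) VALUES (?)", ("send_email",))
rows = conn.execute("SELECT channel, payload FROM notifications").fetchall()
print(rows)
```

Unlike Postgres's LISTEN/NOTIFY this has no push delivery, so a worker still has to poll (or use sqlite3's C-level update hook where available); but it gives the same at-least-once "something changed" signal for testing celery-style brokers.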
Can Large Language Models Understand Symbolic Graphics Programs?
What an awful paper title, saying "Symbolic Graphics Programs" when they just mean "vector graphics". I don't understand why they cannot just use the established term instead. Also, there is no "program" here, in the same way that coding HTML is not programming, as vector graphics are not supposed to be Turing complete. And where they pulled the "symbolic" from is completely beyond me.
I'm more curious how they think LLM's can imagine things:
> To understand symbolic programs, LLMs may need to possess the ability to imagine how the corresponding graphics content would look without directly accessing the rendered visual content
To my understanding, LLMs are predictive engines based upon their tokens and embeddings without any ability to "imagine" things.
As such, an LLM might be able to tell you that the following SVG is a black circle because it is in Mozilla documentation[0]:
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="50" />
</svg>
noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow 
noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg" target="_blank" rel="nofollow noopener">http://www.w3.org/2000/svg">
<circle cx="50" cy="50" r="50" />
</svg>
However, I highly doubt any LLM could tell you the following is a "Hidden Mickey" or "Mickey Mouse Head Silhouette": <svg viewBox="0 0 175 175" xmlns="http://www.w3.org/2000/svg">
<circle cx="100" cy="100" r="50" />
<circle cx="50" cy="50" r="40" />
<circle cx="150" cy="50" r="40" />
</svg>
- [0] https://developer.mozilla.org/en-US/docs/Web/SVG/Element/cir...
If the LLM saves the SVG vector graphic to a raster image like a PNG and prompts with that instead, it will have no trouble labeling what's depicted in the SVG.
So, the task is "describe what an SVG depicts without saving it to a raster image and prompting with that"?
I believe this is exactly what the modern multimodal models are doing -- they are not strictly Large Language Models any more.
Which part is the query preprocessor?
/? LLM stack app architecture https://www.google.com/search?q=LLM+stack+app+architecture&t...
https://cobusgreyling.medium.com/emerging-large-language-mod... ; Flexibility / Complexity,
Using a list to manage executive function
todo.txt is a lightweight text format that a number of apps support: http://todotxt.org/ . From http://todotxt.org/todo.txt :
(priority) task text +project @context
(A) Call Mom @Phone +Family
(A) Schedule annual checkup +Health
(B) Outline chapter 5 +Novel @Computer
(C) Add cover sheets @Office +TPSReports
x Download Todo.txt mobile app @Phone
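The format above is simple enough to parse in a few lines. A minimal sketch (stdlib only, not one of the apps listed at todotxt.org; assumes single-letter priorities and the leading "x " completion marker):

```python
import re

def parse_todo(line: str) -> dict:
    """Parse a todo.txt line into completion, priority, projects, contexts."""
    done = line.startswith("x ")
    if done:
        line = line[2:]
    m = re.match(r"\(([A-Z])\) ", line)
    priority = m.group(1) if m else None
    if m:
        line = line[m.end():]
    return {
        "done": done,
        "priority": priority,
        "projects": re.findall(r"\+(\S+)", line),  # +project tags
        "contexts": re.findall(r"@(\S+)", line),   # @context tags
        "text": line,
    }

print(parse_todo("(A) Call Mom @Phone +Family"))
```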
From "What does my engineering manager do all day?" (2021) https://news.ycombinator.com/item?id=28680961 :
> - [ ] Create a workflow document with URLs and Text Templates
> - [ ] Create a daily running document with my 3 questions and headings and indented markdown checkbox lists; possibly also with todotxt/todo.txt / TaskWarrior & BugWarrior -style lineitem markup.
3 questions: Since, Before, Obstacles: What have I done since the last time we met? What will I do before the next time we meet? What obstacles are blocking my progress?
## yyyy-mm-dd
### @teammembername
#### Since
#### Before
#### Obstacles
A TaskWarrior demo from which I wrote https://gist.github.com/westurner/dacddb317bb99cfd8b19a3407d... :
$ task help
task; task list; # 0 tasks.
task add "Read the source due:today priority:H project:projectA"
task add Read the docs later due:tomorrow priority:M project:projectA
task add project:one task abc
# "Created task 3."
task add project:one task def +someday depends:3
task
task context define projectA "project:projectA or +urgent"
task context projectA
task add task ghi
task add task jkl +someday depends:3
task # lists just tasks for the current context:"project:projectA"
task next
task 1 done
task next
task 2 start
task 2 done
task context none
task delete
TaskWarrior Best Practices: https://taskwarrior.org/docs/best-practices/
The TaskWarrior +WAITING virtual tag is the new way instead of status:waiting, according to the changelog.
Every once in awhile, I remember that I have a wiki/workflow document with a list of labels for systems like these: https://westurner.github.io/wiki/workflow#labels :
GTD says have labels like -next, -waiting, and -someday.
GTD also incorporates the 4 D's of Time Management (the Eisenhower Matrix method).
4 D's of Time Management: Do, Defer/Schedule, Delegate, Delete
Time management > The Eisenhower Method: https://en.wikipedia.org/wiki/Time_management#The_Eisenhower...
Hipster PDA: https://en.wikipedia.org/wiki/Hipster_PDA
"Use a work journal" (2024-07; 1083 pts) https://news.ycombinator.com/item?id=40950584
U.S. Inflation Rate 1960-2024
Keep in mind the things tracked in the Consumer Price Index are not static; over time there is a drift toward comparing apples to oranges. For example, if beef becomes too expensive and consumers switch to pork, eventually beef is removed from the basket and pork is tracked instead. That's not less inflation. That's bait and switch.
Furthermore, year-by-year inflation is too out of context. Instead, for a given year we should also get the compounded rate for the previous 5 years, 10 years, and 15 years. That level of transparency would be significant.
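The trailing compounded rate is straightforward to compute from an index series. A sketch with hypothetical index values (not actual CPI data):

```python
def compounded_rate(index: list[float], years: int) -> float:
    """Annualized inflation over the trailing `years` years of an index series."""
    if len(index) < years + 1:
        raise ValueError("not enough history")
    start, end = index[-(years + 1)], index[-1]
    return (end / start) ** (1 / years) - 1

# Hypothetical index: exactly 3% inflation every year for 16 year-end values.
cpi = [100 * 1.03 ** n for n in range(16)]
print(f"{compounded_rate(cpi, 5):.4f}")   # ~0.0300
print(f"{compounded_rate(cpi, 10):.4f}")  # ~0.0300
print(f"{compounded_rate(cpi, 15):.4f}")  # ~0.0300
```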
CPI: Consumer Price Index; "inflation": https://en.wikipedia.org/wiki/Consumer_price_index
US BLS: Bureau of Labor Statistics > Consumer Price Indexes Overview: https://www.bls.gov/cpi/overview.htm :
> Price indexes are available for the U.S., the four Census regions, nine Census divisions, two size of city classes, eight cross-classifications of regions and size-classes, and for 23 local areas. Indexes are available for major groups of consumer expenditures (food and beverages, housing, apparel, transportation, medical care, recreation, education and communications, and other goods and services), for items within each group, and for special categories, such as services.
- FWIU food prices never decreased after COVID-19.
https://www.google.com/search?q=FWIU+food+prices+never+decre.... :
"Consumer Price Index for All Urban Consumers: Food and Beverages in U.S. City Average (CPIFABSL)" https://fred.stlouisfed.org/series/CPIFABSL
"Consumer Price Index for All Urban Consumers: All Items Less Food and Energy in U.S. City Average (CPILFESL)" https://fred.stlouisfed.org/series/CPILFESL
> The "Consumer Price Index for All Urban Consumers: All Items Less Food & Energy" is an aggregate of prices paid by urban consumers for a typical basket of goods, excluding food and energy. This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
- COVID eviction moratoriums ended in 2021. How did that affect CPI rent/lease/mortgage, and new housing starts now that lumber prices have returned to normal? https://en.wikipedia.org/wiki/COVID-19_eviction_moratoriums_...
"Consumer price index (CPI) for rent of primary residence compared to CPI for all items in the United States from 2000 to 2023" https://www.statista.com/statistics/1440254/cpi-rent-primary...
- pandas-datareader > FRED, caching queries: https://pandas-datareader.readthedocs.io/en/latest/remote_da...
> This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
Translation: The Core Consumer Price Index has little to do with actual citizens, and instead reflects some theoretical group of people who don't eat and don't go anywhere.
In addition, politicians recite the CCPI and The Media parrot them. Both knowing - or should know - it's deceptive and doesn't actually reflect reality.
What could go wrong?
What metrics do you suggest for the purpose?
Is tone helpful?
It's not the metrics per se, it's the disconnect between what they actually represent and how they're used to misrepresent the current economic feelings of the masses.
Take the last few months in the USA: we've been assured by the party in power that inflation is under control, etc. Yet *no one* who goes to the supermarket on a weekly basis has seen that or believes that. Eventually, such gaslighting leads to mistrust and pushback.
The smart thing would be to include anything practical consumers MUST purchase. Excluding food and gas might be great for academics, but for everyone else it smells of BS.
How should the free hand of the market affect post-COVID global food prices?
Tariff spats (and firms' resultant inability to compete at global price points) or free market trade globalization with nobody else deciding the price for the contracting [fair trade] parties.
There are multiple lines that can be plotted on a chart: Core CPI, CPI with food and/or gas, GDP, the yield curve, M1 and M2, R&D foci, and industry efficiency metrics.
You're off topic. The discussion is about CCPI and how it's misleading and abused. That is, how it's used as a propaganda tool.
I'm not trying to explain post Covid inflation, the free market, etc.
How should they get food price inflation under control?
What other metrics for economic health do you suggest?
Well, you certainly don't get it under control by removing it from the key metric. You don't get it under control by pretending it hasn't been removed and then gaslighting the public by saying everything is OK, that it's under control.
What can they do here?
"Harris’ plan to stop [food] price gouging could create more problems than it solves" (2024) https://www.cnn.com/2024/08/16/business/harris-price-gouging... :
> Food prices have surged by more than 20% under the Biden-Harris administration, leaving many voters eager to stretch their dollars further at the grocery store.
> On Friday, Vice President Kamala Harris said she has a solution: a federal ban on price gouging across the food industry.
They still haven't undone the prior administration's tariff wars of recent yore fwiu. What happened to everyone loves globalization and Free Trade and Fair Trade? Are you down with TPP?
But did prices reflect the value of the workers and their product in the first place?
There were compost shortages during COVID.
There are still fertilizer shortages FWIU. TIL about Koch (not Ed Koch) trying to buy Iowa's basically state-sponsored fertilizer business, which was created in direct response to the fertilizer market conditions created in significant part by said parties.
Humanoid robots in agriculture, with agrivoltaics, and sustainable no-till farming practices; how much boost in efficiency can we all watch and hope for?
What Is a Knowledge Graph?
Good article on the high level concepts of a knowledge graph, but some concerning mischaracterizations of core functions of ontologies supporting the class schema and continued disparaging of competing standards-based (RDF triple-store) solutions. That the author omits the updates for property annotations using RDF* is probably not an accident and glosses over the issues with their proprietary clunky query language.
While knowledge graphs are useful in many ways, personally I wouldn't use Neo4J to build a knowledge graph as it doesn't really play to any of their strengths.
Also, I would rather stab myself with a fork than try to use Cypher to query a concept graph when better standards-based options are available.
I enjoy cypher, it's like you draw ASCII art to describe the path you want to match on and it gives you what you want. I was under the impression that with things like openCypher that cypher was becoming (if not was already) the main standard for interacting with a graph database (but I could be out of date). What are the better standards-based options you're referring to?
W3C SPARQL (SPARUL is now SPARQL 1.1 Update), SPARQL-star, and GQL.
GraphQL is a JSON HTTP API schema (2015): https://en.wikipedia.org/wiki/GraphQL
GQL (2024): https://en.wikipedia.org/wiki/Graph_Query_Language
W3C RDF-star and SPARQL-star (2023 editors' draft): https://w3c.github.io/rdf-star/cg-spec/editors_draft.html
SPARQL/Update implementations: https://en.wikipedia.org/wiki/SPARUL#SPARQL/Update_implement...
/? graphql sparql [ cypher gremlin ] site:github.com inurl:awesome https://www.google.com/search?q=graphql+sparql++site%253Agit...
But then data validation is needed everywhere; for language-portable JSON-LD RDF validation there are many implementations of JSON Schema (for fixed-shape JSON-LD messages), there's the W3C SHACL Shapes Constraint Language, and json-ld-schema combines JSON Schema + SHACL.
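For a fixed-shape JSON-LD message, even a hand-rolled check can illustrate the idea. A stdlib-only sketch (real deployments would use a JSON Schema or SHACL implementation; the required keys here are hypothetical):

```python
import json

# Hypothetical required shape for an incoming JSON-LD message.
REQUIRED = {"@context": str, "@type": str, "name": str}

def validate(doc: dict) -> list[str]:
    """Return a list of validation errors; empty means the shape matched."""
    errors = []
    for key, typ in REQUIRED.items():
        if key not in doc:
            errors.append(f"missing {key}")
        elif not isinstance(doc[key], typ):
            errors.append(f"{key}: expected {typ.__name__}")
    return errors

msg = json.loads('{"@context": "https://schema.org", "@type": "Person", "name": "Ada"}')
print(validate(msg))  # []
```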
/? hnlog SHACL, inference, reasoning; https://news.ycombinator.com/item?id=38526588 https://westurner.github.io/hnlog/#comment-38526588
> free copy of the O’Reilly book "Building Knowledge Graphs: A Practitioner’s Guide"
Knowledge Graph (disambiguation) https://en.wikipedia.org/wiki/Knowledge_Graph_(disambiguatio...
Knowledge graph: https://en.wikipedia.org/wiki/Knowledge_graph :
> In knowledge representation and reasoning, a knowledge graph is a knowledge base that uses a graph-structured data model or topology to represent and operate on data. Knowledge graphs are often used to store interlinked descriptions of entities – objects, events, situations or abstract concepts – while also encoding the free-form semantics or relationships underlying these entities. [1][2]
> Since the development of the Semantic Web, knowledge graphs have often been associated with linked open data projects, focusing on the connections between concepts and entities. [3][4] They are also historically associated with and used by search engines such as Google, Bing, Yext and Yahoo; knowledge-engines and question-answering services such as WolframAlpha, Apple's Siri, and Amazon Alexa; and social networks
Ideally, a knowledge graph - starting with maybe a "personal knowledge base" in a text document format that can be rendered to HTML with templates - can be linked with other data about things with correlate-able names; you can JOIN a knowledge graph with other graphs if the node and edge relations have schema and URIs that make the JOIN possible.
A knowledge graph is a collection of nodes and edges (or nodes and edge nodes) with schema, so that it is query-able and JOIN-able.
A Named Graph URI may be the graphid ?g of an RDF statement in a quadstore:
?g ?s ?p ?o // ?o_datatype ?o_lang
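The quad pattern above can be sketched in plain Python (an illustration only, not a SPARQL engine; the graph names and terms here are made up):

```python
# Illustration only: a quadstore holds (graph, subject, predicate, object)
# tuples, so a named-graph URI ?g scopes each statement.
quads = [
    ("urn:graph:people", "urn:alice", "schema:knows", "urn:bob"),
    ("urn:graph:people", "urn:bob", "schema:name", "Bob"),
    ("urn:graph:places", "urn:stl", "schema:name", "St. Louis"),
]

def triples_in(graph_id, quads):
    """Return the (s, p, o) triples asserted in one named graph."""
    return [(s, p, o) for (g, s, p, o) in quads if g == graph_id]

print(triples_in("urn:graph:people", quads))
```

In SPARQL the same scoping is expressed with the GRAPH keyword over a dataset of named graphs.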
MiniBox, ultra small busybox without uncommon options
Would this compile to WASM for use as a terminal in JupyterLite? Though, busybox has the ash shell instead of bash. https://github.com/jupyterlite/jupyterlite/issues/949
It could with a few tweaks, but it lacks features when compared to bash and busybox ash. For example, it doesn't support shell scripting, configuration file initialization, etc. I wouldn't recommend it right now since it doesn't have most features of a standard shell, but if you want to test it out, give it a try.
RustyBox is a c2rust port/rewrite of BusyBox to Rust, though it doesn't look like anyone has contributed fixes to the unsafe parts in a while.
So, no SysV /etc/init.d because no shell scripting, and no systemd because of embedded-environment resource constraints? There must be a process to init and respawn processes on events like boot (and reboot, if there's a writeable filesystem).
Yes, you are right, I am trying to create a smaller version of sysvinit compatible with busybox's init. There is still no shell scripting yet, but I do promise to make the shell fully compatible with other shells with shell scripting in a future version.
What happened to RustyBox? c2rust is way easier than doing it manually, so why is it abandoned?
Same question about rustybox. Maybe they're helping with cosmos-de or coreutils or something.
OpenWRT has procd instead of systemd; rather than parsing systemd unit configuration files (which would eliminate shell scripting errors when spawning processes as root), it sources a library of shell functions and runs [somewhat haphazard] /etc/init.d scripts.
https://westurner.github.io/hnlog/ Ctrl-F procd, busd
(This re: bootloaders, multiple images, and boot integrity verification: https://news.ycombinator.com/item?id=41022352 )
Systemd is great for many applications.
Alternatives to systemd: https://without-systemd.org/wiki/index.php/Alternatives_to_s...
There are advantages to systemd unit files instead of scripts. Porting packages between distros is less work with unit files. Systemd respawns processes consistently and with standard retry/backoff functionality. Systemd+journals produces indexable logs with consistent datetimes.
There's a pypi:SystemdUnitParser.
docker-systemctl-replacement > systemctl3.py parses and schedules processes defined in systemd unit files: https://github.com/gdraheim/docker-systemctl-replacement/blo...
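Relatedly, systemd unit files are INI-style, so even stdlib configparser can read simple ones. A sketch with a hypothetical unit (it won't handle systemd-specific syntax like repeated ExecStart= keys):

```python
import configparser

# A minimal, hypothetical unit file with restart/backoff settings.
unit_text = """
[Unit]
Description=Example service

[Service]
ExecStart=/usr/bin/example-daemon
Restart=on-failure
RestartSec=5
"""

# systemd unit files are INI-style; stdlib configparser reads simple ones.
parser = configparser.ConfigParser()
parser.read_string(unit_text)
print(parser["Service"]["Restart"])  # on-failure
```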
Architectural Retrospectives: The Key to Getting Better at Architecting
Architecture retrospectives sound like a good idea but don't seem to be "the key" because in practice they fail too often.
Retrospectives look backwards, and most teams aren't wired that way. And unlike sprint retrospectives, architecture retrospectives turn up issues that are unrealistic/impossible to adjust in real time.
Instead, try lightweight architectural decision records during the selection process, at the start, when things are easiest to change and can be the most flexible.
When teams try ADRs in practice, teamwork goes up, choices get better, and implementations get more realistic. And if someone later on wants to do a retrospective, then the key predictive information is right there in the ADR.
https://github.com/joelparkerhenderson/architecture-decision...
Is there already a good way to link an ADR Architectural Decision Record with Threat Modeling primitives and considerations?
"Because component_a doesn't support OAuth", "Because component_b doesn't support signed cookies"
Threat Model: https://en.wikipedia.org/wiki/Threat_model
GH topic: threat-modeling: https://github.com/topics/threat-modeling
Real and hypothetical architectural security issues can be linked to CWE Common Weakness Enumeration URLs.
SBOM tools help to inventory components and versions in an existing architecture and to JOIN with vuln databases that publish in OSV OpenSSF Vulnerability Format, which is useful for CloudFuzz, too.
Good question. Yes there are a variety of ways that help.
1. If the team favors lightweight markdown, then add a markdown section for Threat Modeling and markdown links to references. Some teams favor doing this for more kinds of analysis, such as business analysis (e.g. SWOT) and environment analysis (e.g. PESTLE) and risk analysis (e.g. RAID).
2. If the team favors metrics systems, then consider trying a Jupyter notebook. I haven't personally tried this. Teams anecdotally tell me these can be excellent for showing probable effects.
3. If the team is required to use existing tooling such as tracking systems, such as for auditing and compliance, then consider writing the ADR within the tracking system and linking to it internally.
> 1.
Add clickable URL links to the reference material for whichever types of analyses.
> 2. Jupyter
Notebooks often omit test assertions that would be expected of a Python module with an associated tests/ directory.
With e.g. ipytest, you can run specific unit tests in notebook input cells (instead of testing the whole notebook).
There are various ways to template and/or parametrize notebooks; copy the template.ipynb and include the date/time in the filename__2024-01-01.ipynb, copy a template git repo and modify, jupyterlab_templates, papermill
> 3. [...] consider writing the ADR within the tracking system and linking to it internally
+1. A "meta issue" or an epic includes multiple other issues.
If you reference an issue as a list item in GFM GitHub-Flavored Markdown, GitHub will auto-embed the current issue title and closed/open status:
- https://github.com/python/cpython/issues/1
- python/cpython#1
Without the leading `- ` it doesn't get the card embed though: https://github.com/python/cpython/issues/1
Whereas this form, with hand-copied issue titles, non-obfuscated URLs you don't need to hover over to read, and `- [ ]` markdown checkboxes, works with all issue trackers:
- [ ] "Issue title" https://github.com/python/cpython/issues/1
- [x] "Issue title" https://github.com/python/cpython/issues/2
ARPA-H announces awards to develop novel technologies for precise tumor removal
This is the sort of thing I'd love to see NIH run many parallel research tracks on, given backing & support to make the research happen & to release the work.
Precision medicine has so much potential and feels due for some really wild breakthroughs, with its ability to make so many exact & small interventions.
I wonder where the Obama Precision Medicine (2015) work has gotten to so far. https://obamawhitehouse.archives.gov/the-press-office/2015/0...
The biggest Obama spend was to create a research cohort (https://allofus.nih.gov/). So far, it hasn't paid dividends in an appreciable way, but the UK Biobank (on which the US program is partially modeled) started in 2006 and is now contributing immensely to the development of medicine.
The US program has the potential to be even more valuable if managed well, but I haven't seen overwhelming indications of reaching that potential yet; however, I think a few more years are needed for a clear evaluation.
Sadly the All of Us program (of which I am a research subject [and researcher]) hasn't done any sort of imaging or ECG. They did draw blood, so that allowed for genome sequencing. That may also, in principle, allow for assaying new biomarkers (I don't believe anything like that has been funded though).
2009-2010 Meaningful Use criteria required physicians to implement electronic health records.
There's now FHIR for sharing records between different vendors' EHR systems. There's a JSON-LD representation of FHIR.
Which health and exercise apps can generate FHIR for self-reporting?
Can participants forward their other EHR/EMR records to the All of Us program?
Can participants or Red Cross or other blood donation services forward collected vitals and sample screening data to the All of Us program?
The imaging reports are being imported, but not the images.
The bigger issue is that clinical imaging is done due to medical indications. Medical indications dramatically confound the results. A key element of the UK Biobank's value is that imaging and ECG are being done without any clinical indication.
So they need more data from healthy patients when they're healthy in order to determine what's anomalous?
SIEM tools do anomaly detection with lots of textual log data and sensor data.
What would a Cost-effectiveness analysis say about collecting data from healthy patients: https://en.wikipedia.org/wiki/Cost-effectiveness_analysis
TIL the ROC curve applies to medical tests.
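Without data from healthy people, the tn/fp cells (and so specificity and the ROC curve) can't be estimated without confounding. A sketch from a hypothetical screening-test confusion matrix (all numbers invented):

```python
# Hypothetical screening-test confusion matrix (all numbers invented):
tp, fn = 90, 10    # diseased: test positive / test negative
fp, tn = 30, 870   # healthy:  test positive / test negative

sensitivity = tp / (tp + fn)   # true positive rate, the ROC y-axis
specificity = tn / (tn + fp)   # 1 - false positive rate (ROC x-axis is FPR)
print(sensitivity, specificity)  # 0.9 0.9666...
```

Sweeping the test's decision threshold traces out the ROC curve of (FPR, TPR) pairs.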
Photon Entanglement Drives Brain Function
Traditional belief: photons do not interact with other photons, and photons are massless per the mass-energy relation.
New findings: photons interact, as phonons, in matter.
"Quantum entangled photons react to Earth's spin" (2024) https://news.ycombinator.com/item?id=40720147 :
> Actually, photons do interact with photons; as phonons in matter: "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 https://news.ycombinator.com/item?id=40600762
"New theory links quantum geometry to electron-phonon coupling" (2024) https://news.ycombinator.com/item?id=40663966 https://phys.org/news/2024-06-theory-links-quantum-geometry-... :
> A new study published in Nature Physics introduces a theory of electron-phonon coupling that is affected by the quantum geometry of the electronic wavefunctions
The field of nonlinear optics deals with photon-photon interactions in matter, and has been around for almost a century.
Do they model photons as rays, vectors, particles, waves, or fluids?
https://en.wikipedia.org/wiki/Nonlinear_optics :
> In nonlinear optics, the superposition principle no longer holds.[1][2][3]
But phonons are quantum waves in or through matter and the superposition principle holds with phonons AFAIU
Superposition is valid in vacuum; well, actually, only until the photons have enough energy to collide and form an electron-positron pair.
It's also valid in most transparent materials, again assuming:
1) each photon doesn't have enough energy to, for example, extract an electron from the material as in the photoelectric effect, or create an electron and a hole in a semiconductor, or ...
2) there are few enough photons that you can model the effect using linear equations
And there are weird materials where the nonlinear effects are easy to trigger.
The conclusion is that superposition is only a nice approximation in the easy cases.
I had always assumed that activity in the brain was unsynchronized, and that it is that which produces the necessary randomness through race conditions. I wonder whether the need to find synchronization is itself a computer analogy that doesn't apply.
Brain activity is highly synchronized, at least according to the definition used by neuroscientists. Brain waves operate at certain hertz based on your level of arousal. https://en.wikipedia.org/wiki/Neural_oscillation
Brain waves also synchronize to other brain waves; "interbrain synchrony"
- "The Social Benefits of Getting Our Brains in Sync" (2024) https://www.quantamagazine.org/the-social-benefits-of-gettin...
But that might be just like the sea has waves?
The brain waves drive the synchronization process by triggering inhibition in neurons not in the targeted subgroup.
There are different frequency waves that operate over different timescales and distances and are typically involved in different types of functional activity.
Mermaid: Diagramming and Charting Tool
A key thing to appreciate is that both GitLab [0] and GitHub [1] support rendering Mermaid graphs in their READMEs.
[0] https://docs.gitlab.com/ee/user/markdown.html
[1] https://github.blog/developer-skills/github/include-diagrams...
JupyterLab supports MermaidJS: https://github.com/jupyterlab/jupyterlab/pull/14102
Looks like Colab almost does.
The official vscode MermaidJS extension probably could work in https://vscode.dev/ : https://marketplace.visualstudio.com/items?itemName=MermaidC... :
ext install MermaidChart.vscode-mermaid-chart
LLMs can generate MermaidJS diagrams in Markdown that can easily be fixed given sufficient review.
Additional diagramming (node graph) tools: GraphViz, Visio, GSlides, Jamboard, yEd by yWorks (layout algorithms), Gephi, Dia, PlantUML, ArgoUML, blockdiag, seqdiag, rackdiag, https://markmap.js.org/
JPlag – Detecting Software Plagiarism
Should a plagiarism score be considered when generating code with an infinite-monkeys-with-selection algorithm or better?
Would that result in an inability to write code even in a clean room, because eventually all possible code strings and mutations thereof would already be patented?
For example, are three notes or chords copyrightable?
Resilient Propagation and Free-Space Skyrmions in Toroidal EM Pulses
"Observation of Resilient Propagation and Free-Space Skyrmions in Toroidal Electromagnetic Pulses" (2024) https://pubs.aip.org/apr/article/11/3/031411/3306444/Observa...
- "Electromagnetic vortex cannon could enhance communication systems" (2024) https://phys.org/news/2024-08-electromagnetic-vortex-cannon-...
Elroy Jetson, Buzz Lightyear, and Ghostbusters all project rings / vortexes.
Are toroidal pulses any more stable for fusion, or thrust and handling?
"Viewing Fast Vortex Motion in a Superconductor" (2024) https://physics.aps.org/articles/v17/117
New research on why Cahokia Mounds civilization left
I used to live near the Missouri River before it meets the Mississippi River south of STL, but haven't yet made it to the Cahokia Mounds which are northeast and across the river from what is now St. Louis, Missouri.
Was it disease?
["Fusang" to the Chinese, various names to Islanders FWIU]
[?? BC/AD: Egyptian treasure in Illinois, somehow without paddleboats to steam up the Mississippi]
~800 AD: Lead Cross of Knights Templar in Arizona, according to America Unearthed S01E10. https://www.google.com/search?q=%7E800+AD%3A+Templar+Cross%2... ; a more recent dating of Tucson artifacts: https://en.wikipedia.org/wiki/Tucson_artifacts
~1000 AD: Leif Erickson, L'Anse aux Meadows; Discovering Vinland: https://en.m.wikipedia.org/wiki/Leif_Erikson#Discovering_Vin...
And then the Story of Erik the Red, and a Skraeling girl in Europe, and Columbus; and instead we'll celebrate Juneteenth, the day when the news reached Galveston.
Did they plant those mounds? Did they all bring good soil or dirt to add to the mound?
May Pole traditions may be similar to "all circle around the mountain" practices in at least ancient Egyptian culture FWIU.
If there was a lot of contact there, would that have spread diseases? (Various traditions have intentionally high contact with holy water containers on the way in, too, for example.)
FWIU there's strong evidence for Mayans and Aztecs in North America; but who were they displacing?
For one thing, the words you want are "Maya" and "Nahua". "Mayan" is the language family and "Aztec" refers to various things, none of which are what you want.
They're also definitely unrelated to the mound cultures except for the broadest possible relationships like existing on the same continent.
How are pyramid-building cultures definitely unrelated to mound-building cultures?
Cahokia Mounds: https://en.wikipedia.org/wiki/Cahokia :
> Today, the Cahokia Mounds are considered to be the largest and most complex archaeological site north of the great pre-Columbian cities in Mexico.
Chicago was a trading post further north FWIU, but not an archaeological site.
"Michoacan, Michigan, Mishigami, Mizugami: Etymological Origins? A Legend." https://christopherbrianoconnor.medium.com/michoacan-michiga...
Is there evidence of hydrological engineering or stonework?
It's not clear whether the megalithic Sage Wall in MT was man-made, and sort of looks like the northern glacier pass it may have marked.
FWIU there are quarry sites in the southwest that predate most timelines of stonework in the Americas and in Egypt, Sri Lanka / Indonesia, and East Asia; but they're not further north than Cahokia Mounds.
In TN, there are many Clovis sites; but they decided to flood the valley that was home to Sequoyah - who gave written form to Cherokee and other native languages - and also a 9500-year old archaeological site.
This says the Clovis people of Clovis, New Mexico are the oldest around: https://tennesseeencyclopedia.net/entries/paleoindians-in-te...
The Olmecs, Aztecs, and Mayans all worked stone.
From where did stonework like the Osireon originate?
> How are pyramid-building cultures definitely unrelated to mound-building cultures?
You're asserting a causal relationship, not a functional or morphological similarity. You haven't made an argument for that yet.
> This says the Clovis people of Clovis, New Mexico are the oldest around
They aren't. Pre-Clovis is well established at this point. I'm not sure you intend this with your phrasing, but maybe this will be useful: archaeological cultures like Clovis don't name a "people", because the actual humans may or may not have shared unified identities. Type sites also just represent a clear example of the broader category they're naming, rather than anything about where things originated.
There are also thousands and thousands of years of separation between Clovis, the Mexica, Mound cultures, or the Maya, so unless you make an argument I'm not sure what you're trying to say here. Do you think all lithics are the same?
> The Olmecs, Aztecs, and Mayans all worked stone.
1) you've made the same naming mistake again and 2) this isn't an argument for anything.
Queues invert control flow but require flow control
I always wish more metaphors were built using conveyor belts in these discussions. It helps me mentally underscore that you have to pay attention to what queue/belt you load and why you need to give that a lot of thought.
Granted, I'm probably mostly afraid of diving into factorio again. :D
The Spintronics mechanical circuits game is sort of like conveyor belts.
Electrons are not individually identified like things on a conveyor belt.
Electrons in conductors, semiconductors, and superconductors do behave like fluids.
Turing tapes; https://hackaday.com/2016/08/18/the-turing-tapes/
Theory of computation > Models of computation: https://en.wikipedia.org/wiki/Theory_of_computation
Reservoir of liquid water found deep in Martian rocks
Article: "Liquid water in the Martian mid-crust" (2024) https://www.pnas.org/doi/10.1073/pnas.2409983121
- "Mars may host oceans’ worth of water deep underground" [according to an analysis of seismic data] https://www.planetary.org/articles/mars-may-host-oceans-wort... :
> Now, a team of scientists has used Marsquakes — measured by NASA’s InSight lander years ago — to see what lies beneath. Since the way a Marsquake travels depends on the rock it’s passing through, the researchers could back out what Mars’ crust looks like from seismic measurements. They found that the mid-crust, about 10-20 kilometers (6-12 miles) down, may be riddled with cracks and pores filled with water. A rough estimate predicts these cracks could hold enough water to cover all of Mars with an ocean 1-2 kilometers (0.6-1.2 miles) deep
> [...] This reservoir could have percolated down through nooks and crannies billions of years ago, only stopping at huge depths where the pressure would seal off any cracks. The same process happens on our planet — but unlike Mars, Earth’s plate tectonics cycles this water back up to the surface
> [...] “It would be very challenging,” Wright said. Only a few projects have ever bored so deep into Earth’s crust, and each one was an intensive undertaking. Replicating that effort on another planet would take lots of infrastructure, Wright goes on, and lots of water.
How much water does drilling take on Earth?
Water is used to displace debris and to carry it up to the surface.
A cylinder of 30cm diameter and 10km deep would hold around 700k litres.
1 US gal = 3.785 litres
700k litres ≈ 0.185 million US gallons
"How much water does the typical hydraulically fractured well require?" https://www.usgs.gov/faqs/how-much-water-does-typical-hydrau... :
> Water use per well can be anywhere from about 1.5 million gallons to about 16 million gallons
https://www.epa.gov/watersense/statistics-and-facts :
> Each American uses an average of 82 gallons of water a day at home (USGS, Estimated Use of Water in the United States in 2015).
So, 1M gal / 82 gal/person/day = 12,195 person-days of water.
Camping guidelines suggest 1-2 gallons of water per person per day.
1M gal / 2 gal/person/day = 500,000 person-days of water
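The arithmetic above, as a quick sketch:

```python
import math

# Borehole volume for the figures quoted above: 30 cm diameter, 10 km deep.
radius_m, depth_m = 0.15, 10_000
volume_m3 = math.pi * radius_m ** 2 * depth_m   # ~707 m^3
litres = volume_m3 * 1000                        # ~707,000 litres

LITRES_PER_GAL = 3.785
million_gal = litres / LITRES_PER_GAL / 1e6      # ~0.187 million US gallons

# Person-days of water per million gallons, at the two rates above:
household = 1e6 / 82   # ~12,195 person-days
camping = 1e6 / 2      # 500,000 person-days
print(million_gal, household, camping)
```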
The 2016 Mars NatGeo series predicts dynamics of water scarcity on Mars.
Water on Mars: https://en.wikipedia.org/wiki/Water_on_Mars
Show HN: R.py, a small subset of Python Requests
i revisited the work that i started in 2022 re: writing a subset of Requests but using the standard library: https://github.com/gabrielsroka/r
back then, i tried using urllib.request (from the standard library, not urllib3) but it lacks what Requests/urllib3 has -- connection pools and keep alive [or that's where i thought the magic was] -- so my code ran much slower.
it turns out that urllib.request uses http.client, but it closes the connection. so by using http.client directly, i can keep the connection open [that's where the real magic is]. now my code runs as fast as Requests/urllib3, but in 5 lines of code instead of 4,000-15,000+
moral of the story: RTFM over and over and over again.
"""Fetch users from the Okta API and paginate."""
import http.client
import json
import re
import urllib.parse
# Set these:
host = 'domain.okta.com'
token = 'xxx'
url = '/api/v1/users?' + urllib.parse.urlencode({'filter': 'profile.lastName eq "Doe"'})
headers = {'authorization': 'SSWS ' + token}
conn = http.client.HTTPSConnection(host)
while url:
conn.request('GET', url, headers=headers)
res = conn.getresponse()
for user in json.load(res):
print(user['id'])
links = [link for link in res.headers.get_all('link') if 'rel="next"' in link]
url = re.search('<https://[^/]+(.+)>', links[0]).group(1) if links else None
https://docs.python.org/3/library/urllib.request.html
https://docs.python.org/3/library/http.client.html
IIRC somewhere in the Python mailing list archives there's an email about whether to add HTTP redirect handling and then SSL support to urllib or urllib2, or to create a new urllib or httplib.
How does performance compare to HTTPX, does it support HTTP 1.1 request pipelining, does it support HTTP/2 or HTTP/3?
Show HN: I built an animated 3D bookshelf for ebooks
The most cited authors in the Stanford Encyclopedia of Philosophy
Fascinating list that I thought yall would enjoy! If you’re not yet aware, https://plato.stanford.edu is as close to “philosophical canon” as it gets in modern American academia.
Shoutout to Gödel and Neumann taking top spots despite not really being philosophers, at least in how they’re remembered. Comparatively, I’m honestly shocked that neither Bohr nor Heisenberg made the cut, even though there’s multiple articles on quantum physics… Turing also managed to sneak in under the wire, with 33 citations.
The bias inherent in the source is discussed in detail, and I would also love to hear HN ideas on how to improve this project, and how to visualize the results! I’m not the author, but this is right up my alley to say the least, and I’d love to take a crack at it.
From "Show HN: WhatTheDuck – open-source, in-browser SQL on CSV files" https://news.ycombinator.com/item?id=39836220 :
> datasette-lite can load [remote] sqlite and Parquet but not yet DuckDB (?) with Pyodide in WASM, and there's also JupyterLite as a datasette plug-in: https://github.com/simonw/datasette-lite https://news.ycombinator.com/user?id=simonw https://news.ycombinator.com/from?site=simonwillison.net
JSON-LD with https://schema.org/Person records and wikipedia/dbpedia RDF URIs would make it easy to query whichever datasets can be joined on common RDFS properties like https://schema.org/identifier (and its rdfs:subPropertyOf sub-properties), https://schema.org/url, and :sameAs.
Plato in RDF from dbpedia: https://dbpedia.org/page/Plato
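A minimal illustrative JSON-LD record for such a :Person, joined to the dbpedia/Wikipedia identity URIs above via sameAs (the record structure is an assumption, not from any particular dataset):

```python
import json

# Illustrative JSON-LD: a schema.org Person linked to its dbpedia and
# Wikipedia URIs via sameAs, so graphs can be JOINed on those URIs.
plato = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Plato",
    "sameAs": [
        "https://dbpedia.org/resource/Plato",
        "https://en.wikipedia.org/wiki/Plato",
    ],
}
print(json.dumps(plato, indent=2))
```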
Today there are wikipedia URLs, DOI URN URIs, shorturls in QR codes, ORCID specifically to search published :ScholarlyArticle by optional :author, and now there are W3C DIDs Decentralized Identifiers for signing, identifying, and searching of unique :Thing and skos:Concept that can be generated offline and optionally registered centrally, or centrally generated and assigned like DOIs but they're signing keys.
Given uncertainty about time intervals, plot concepts over time with charts showing graph growth over time. Maybe philosophy skos:Concept intervals (and relations, links) from human annotations thereof and/or from LLM parsing and search snippets of Wikipedia, dbpedia RDF, wikidata RDF, and ranked Stanford Encyclopedia of Philosophy terminological occurrence frequency.
- "Datasette Enrichments: a new plugin framework for augmenting your data" (2023) by row with asyncio and optionally httpx: https://simonwillison.net/2023/Dec/1/datasette-enrichments/
Ask HN: How to Price a Product
I'm a software engineer with 11 years of experience. I have been showing interest in frontend design and development, and my day job is around it.
I built a working concept that converts designs to interactive code. However, the tech is not usable at this stage: no documentation, and it looks bad, but it works as expected.
I'm aware of selling services as SaaS. I'd like to sell the tech as a product containing the software, a product manual, and usage instructions with samples.
The target users are designers; initial feedback raised some points to address, and they are curious and interested.
Having said that it's not SaaS and not subscription-based, I'd like to know how to put a price on it as a product.
The Accounting Equation values your business:
Assets = Liabilities + Equity
Accounting Equation: https://en.wikipedia.org/wiki/Accounting_equation : Assets = Liabilities + Contributed Capital + Revenue − Expenses − Dividends
Net income = Revenue − Expenses
Cash flow is Revenue.
Cash flow: https://en.wikipedia.org/wiki/Cash_flow :
> A cash flow CF is determined by its time t, nominal amount N, currency CCY, and account A; symbolically, CF = CF(t, N, CCY, A)
Though CF(A, t, ...) or CF(t, A, ...) may be more search-index optimal; and really there are A_src_n, A_dest_n accounts [...] as in the ILP Interledger Protocol.
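The quoted definition CF = CF(t, N, CCY, A) could be modeled as a record type; a sketch with hypothetical field values:

```python
from dataclasses import dataclass
from datetime import date

# The cash flow definition CF = CF(t, N, CCY, A) as a record type (sketch).
@dataclass(frozen=True)
class CashFlow:
    t: date     # time
    N: float    # nominal amount
    CCY: str    # currency
    A: str      # account (hypothetical naming scheme)

cf = CashFlow(t=date(2024, 1, 1), N=99.0, CCY="USD", A="revenue:product_a")
print(cf)
```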
Payment fees are part of the price, unless you're a charity with a "donate processing costs too" option.
OTOH, there are already standard chart-of-accounts names:
price_product_a = cost_payment_fees + cost_materials + cost_labor + cost_marketing + cost_sales + cost_support + cost_errors_omissions + cost_chargebacks + cost_legal + cost_future_liabilities + [...]
A CAS Computer Algebra System like {SymPy, Sage} in a notebook can help define inequality relations to bound or limit the solution volume or hypervolume(s). And then unit test functions can assert that a hypothesized model meets criteria for success.
But in Python, for example, test_ functions can't return values to the test runner, they would need to write [solution evaluation score] outputs to a store or a file to be collected with the other build artifacts and attached to a GitHub Release, for example.
Eventually, costs = {a: 0, b: 2} so that it's costs[account:str|uint64] instead of cost_account, and then costs = ERP.reports[name,date].archived_output()
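A sketch of the cost-sum price above as a costs dict; every account name and number here is invented, and the margin is an assumption:

```python
# Sketch of the cost-sum pricing model above (all numbers invented).
costs = {
    "payment_fees": 3.0,
    "materials": 20.0,
    "labor": 40.0,
    "marketing": 10.0,
    "support": 7.0,
}
target_margin = 0.30  # assumed 30% margin

break_even = sum(costs.values())          # price floor across all accounts
price = break_even * (1 + target_margin)  # ~104.0 for these numbers
print(break_even, price)
```

A unit test can then assert the model's criteria for success, e.g. that price covers break_even.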
Monte Carlo simulation is possible with things like PyMC (MCMC), and TIL about PyVBMC. Agent-based simulation of consumers may or may not be more expensive or the same problem. Classical economics models many rational consumers making informed buying decisions; behavioral economics studies how real consumers deviate from that.
Looking at causal.app > Templates, I don't see anything that says "pricing" but instead "Revenue" which may imply models for sales, pricing, costs, cost drivers;
> Finance > Marketing-driven SaaS Revenue Model for software companies with multiple products. Its revenue growth is driven by marketing spend.
> Finance > Sales-driven SaaS Revenue > Understand sales rep productivity and forecast revenue accurately
/? site:causal.app pricing model https://www.google.com/search?q=site%3Acausal.app+pricing+mo...
- Blog: "The Pros and Cons of Common SaaS Pricing Models",
- Models: "Simple SaaS pricing calculator",
- /? hn site=causal.app > more lists a number of SaaS metrics posts: https://news.ycombinator.com/from?site=causal.app
/? startupschool pricing: https://www.google.com/search?q=startupschool+pricing
Startup School Curriculum > Ctrl-F pricing: https://www.startupschool.org/curriculum
- "Startup Business Models and Pricing | Startup School" https://youtube.com/watch?v=oWZbWzAyHAE&
/? price sensitivity analysis Wikipedia: https://www.google.com/search?q=price+sensitivity+analysis+W...
Pricing strategies > Models of pricing: https://en.wikipedia.org/wiki/Pricing_strategies#Models_of_p...
Price analysis > Key Aspects, Marketing: https://en.wikipedia.org/wiki/Price_analysis :
> In marketing, price analysis refers to the analysis of consumer response to theoretical prices assessed in survey research
/? site:github.com price sensitivity: https://www.google.com/search?q=site%3Agithub.com+price+sens... :
As a market economist looking at product pricing: given maximal optimization for price and CLV customer lifetime value, what corrective forces will pull the price back down? Macroeconomic forces, competition, ...
Porter's five forces analysis: https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysi... :
> Porter's five forces include three forces from 'horizontal competition' – the threat of substitute products or services, the threat of established rivals, and the threat of new entrants – and two others from 'vertical' competition – the bargaining power of suppliers and the bargaining power of customers.
> Porter developed his five forces framework in reaction to the then-popular SWOT analysis, which he found both lacking in rigor and ad hoc.[3] Porter's five-forces framework is based on the structure–conduct–performance paradigm in industrial organizational economics. Other Porter's strategy tools include the value chain and generic competitive strategies.
Are there upsells now, later; planned opportunities to produce additional value for the longer term customer relationship?
When and how will you lower the price in response to competition and costs?
How does a pricing strategy vary if at all if a firm is Bootstrapping vs the Bank's money?
/? saas pricing: https://hn.algolia.com/?q=saas+pricing
Rivian reduced electrical wiring by 1.6 miles and 44 pounds
The electrical wiring in cars is Conway's law manifest in copper.
For those unfamiliar with Conway's law, I am arguing that how the car companies have organized themselves--and their budgets--ends up being directly reflected in the number of ECUs as well as how they're connected with each other. I imagine that by measuring the amount of excess copper, you'd have a pretty good measure for the overhead involved in the project management from the manufacturers' side.
(I previously worked for Daimler)
Would be funny to have a ‘mm Cu / FTE’ scaling law.
The ohm-meter (Ω·m) is the unit of electrical resistivity; the ohm is the unit of resistance.
Electrical resistivity and conductivity: https://en.wikipedia.org/wiki/Electrical_resistivity_and_con...
Is there a name for Wh/m or W/m (of Cu or C) of loss? Just % signal loss?
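For the W/m question: resistive loss per unit length is just P/L = I²R/L = I²ρ/A, so conductors are usually rated in Ω/m (or Ω/km), and dissipation in W/m at a given current. A quick sketch; the wire gauge and current here are assumed example values:

```python
# Resistive loss per meter: R/L = rho / A, P/L = I^2 * rho / A
# Assumed values: 10 A through a 1.5 mm^2 copper wire
RHO_CU = 1.68e-8          # copper resistivity, ohm*m
AREA = 1.5e-6             # cross-section, m^2
CURRENT = 10.0            # amperes

r_per_m = RHO_CU / AREA               # ohm per meter of wire
p_per_m = CURRENT ** 2 * r_per_m      # watts dissipated per meter

print(f"{r_per_m * 1000:.1f} mohm/m, {p_per_m:.2f} W/m")
```

So "% signal loss" falls out of comparing that W/m against the power being delivered.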
"Copper Mining and Vehicle Electrification" (2024) https://www.ief.org/focus/ief-reports/copper-mining-and-vehi... .. https://news.ycombinator.com/item?id=40542826
There's already conductive graphene 3d printing filament (and far less conductive graphene). Looks like 0.8ohm*cm may be the least resistive graphene filament available: https://www.google.com/search?q=graphene+3d+printer+filament...
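At 0.8 Ω·cm, that filament is still a long way from copper (ρ ≈ 1.68×10⁻⁸ Ω·m). A back-of-envelope comparison:

```python
RHO_CU = 1.68e-8     # copper resistivity, ohm*m
RHO_FIL = 0.8e-2     # 0.8 ohm*cm graphene filament, converted to ohm*m

ratio = RHO_FIL / RHO_CU
print(f"filament is ~{ratio:,.0f}x more resistive than copper")
```

So roughly five orders of magnitude away from replacing copper wiring as-is.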
Are there yet CNT or TWCNT Twisted Carbon Nanotube substitutes for copper wiring?
A high energy hadron collider on the Moon
Could this be made out of a large number of starships that land on the moon and have a magnet payload? Would require something on the order of 1000 starships 50 meters tall, but only about 100 starships 100 meters tall. Perhaps an existing starship with a payload that extends a magnet another 50 meters higher after landing. Has Musk already thought about this? Correction - it would require about 500 starships 100 meters tall. ChatGPT-4o mini is even worse than me at this line-of-sight math! Taller towers could be built to extend upward out of a starship payload, especially if guy wires are used for stability. Magnets would require shielding from the Sun and from reflections off the lunar surface, similar to the James Webb Space Telescope but far less demanding, to achieve a temperature suitable for superconducting magnets. Obviously solar power with batteries to enable lunar nighttime operation. How many magnets are there in the LHC? How exactly circular (or not) does it need to be?
Next thought is to build it in space where there can be unlimited expansion of the ring diameter. Problem would be focusing the beam by aiming many freely floating magnets. Would require some form of electronic beam aiming that can correct for motion of the magnets. My EM theory is too rusty (50 years since I got a C in the course) to figure out what angle a proton beam can be bent by one magnet at these energies and so how many magnets would be required.
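On "how many magnets" and "what angle can one magnet bend the beam": for a synchrotron the standard relation is p[GeV/c] ≈ 0.2998·B[T]·r[m], so a target beam energy and dipole field fix the bending radius and hence the dipole count. A sketch using LHC-like numbers as a sanity check (the dipole length and field are assumed values):

```python
import math

def dipole_count(p_gev, b_tesla, dipole_len_m):
    """Bending radius and number of dipoles to bend a beam of momentum
    p_gev (GeV/c) around a full circle, with field b_tesla and magnets
    dipole_len_m long. Uses p [GeV/c] = 0.2998 * B [T] * r [m]."""
    r = p_gev / (0.2998 * b_tesla)        # bending radius, meters
    bend_length = 2 * math.pi * r         # total bending length needed
    return r, math.ceil(bend_length / dipole_len_m)

# LHC-like numbers: 7 TeV protons, 8.33 T field, 14.3 m dipoles
r, n = dipole_count(7000, 8.33, 14.3)
print(f"bending radius ~{r:.0f} m, ~{n} dipoles")  # close to the LHC's 1232 dipoles
```

Each dipole then bends the beam by 2π/n radians; the real ring also needs straight sections and focusing quadrupoles, so this is a lower bound.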
NotebookLM is designed for this math.
First thoughts without reading the paper: there's too much noise and the moon is hollow (*), so even if you could assemble it [a particle collider] by bootstrapping with solar and thermoelectric and moon dirt and self-replicating robots, the rare earth payload cost is probably a primary cost driver barring new methods for making magnets from moon rock rare earths.
Is the moon hollow?
Lunar seismology: https://en.wikipedia.org/wiki/Lunar_seismology :
> NASA's Planetary Science Decadal Survey for 2012-2022 [12] lists a lunar geophysical network as a recommended New Frontiers mission. [...]
> NASA awarded five DALI grants in 2024, including research on ground-penetrating radar and a magnometer system for determining properties of the lunar core. [14]
Spellcheck says "magnometer" is not even a word.
But how much radiation noise is there from solar and cosmic wind on the surface of the moon - given the moon's lack of magnetosphere and partially thus also its lack of atmosphere - or shielded by how many meters of moon; underground in dormant lava tubes or vents?
> Structure of the Lunar Interior: The solid core has a radius of about 240 km and is surrounded by a much thinner liquid outer core with a thickness of about 90 km.[9] The partial melt layer sits above the liquid outer core and has a thickness of about 150 km. The mantle extends to within 45 ± 5 km of the lunar surface.
What are the costs to drill out the underground collider ring, and at what depth? But first: what would it cost to assemble the moon-drilling unit(s) from local materials, given only energy (and thus on-the-moon production systems)?
Presumably only the collision area needs to be underground to minimize noise. The rest of the ring can be above the surface. Building things on the moon is hard, especially producing materials; it is much easier to ship them from Earth using Starships or a derivative. You can start with a small ring made from a few ships and build a bigger one, and finally a great-circle one. Actually, you only need 500 or so towers 100 meters high, each supporting a magnet. Starship can deliver 100 metric tons to LEO, so it could deliver a significant fraction of that to the lunar surface. Maybe 25 Starships performing 25 lunar missions each would be enough, assuming LEO refueling and payload transfer.
Quantum Cryptography Has Everyone Scrambling
This is an article about QKD, a physical/hardware encryption technology that, to my understanding, cryptography engineers do not actually take seriously. It's not about post-quantum encryption techniques (PQC) like structured lattices.
As far as I can tell, QKD is mainly useful to tell people to separate out the traffic that is really worth snooping onto a network that is hard to snoop on. It is much harder for someone to spy on your secrets when they don't pass through traditional network devices.
Of course, it also tells a dedicated adversary with L1 access exactly where to tap your cables.
Show HN: I've spent nearly 5y on a web app that creates 3D apartments
Funny, I worked on something similar. At my first job we needed to build a fancy 3D alerting system for a client. I found a three.js-based project on GitHub (I think it was called BluePrint 3D) and pitched it to my boss; it saved me from screaming for help while figuring out how to build the same thing in three.js with zero experience, and it also saved us hundreds of hours rebuilding the same thing. It looked somewhat like this tool, though I'm sure this one's way more polished.
It too had a 2D editor for 3D, it was cool, but we were just building floorplans and displaying live data on those floor plans, so all the useful design stuff was scrapped for the most part. This looks nicely polished, good job.
It was a painful project due to the client asking for things that were just... well they were insane.
Neural radiance fields: https://en.wikipedia.org/wiki/Neural_radiance_field :
> A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020,[1] it has since gained significant attention for its potential applications in computer graphics and content creation.[2]
- https://news.ycombinator.com/item?id=38636329
- Business Concept: A b2b business idea from a while ago expanded: "Adjustable Wire Shelving Emporium"; [business, office supply] client product mix 3D configurator with product placement upsells; from photos of a space to products that would work there. Presumably there's already dimensional calibration with a known-good dimension or two;
- ENH: the tabletop in that photo is n x m, so the rest of the objects are probably about
- ENH: the building was constructed in YYYY in Locality, so the building code there then said that the commercial framing studs should be here and the cables and wiring should be there.
My issue was moreso I had 0 experience with 3D and they wanted me to slap something together with threejs. I didn't even know JavaScript that well at the time. That is a pretty cool project though.
Show HN: BudgetFlow – Budget planning using interactive Sankey diagrams
Nice use of Sankey. Here's an ask or thought: I would like to feed this automatically with my spend data from my bank account. I can export a CSV that has time-date, entity, amount (credit/debit) - it would be great if it could split this out by category (where perhaps an LLM could help with this task). I would then like the flow to be automated monthly or perhaps quarterly, with major deltas (per my definition) then pushed to me via alerts. So this isn't so much active budget management as passive nudging when the shape of my spend changes - something I don't look at now but would like to.
If you have a CSV and are happy using LLM, then you should ask an LLM to give you python code to generate a sankey plot. It's not difficult to wrangle.
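A minimal stdlib-only sketch of the CSV-to-flows step. The column names here are assumptions about the bank export format; the resulting (source, target) → value dict is the node/link structure any Sankey renderer (e.g. plotly's `go.Sankey`) expects:

```python
import csv
import io
from collections import defaultdict

# Hypothetical export format: date, entity, category, amount (negative = spend)
SAMPLE = """date,entity,category,amount
2024-07-01,Employer,Income,3000
2024-07-03,Grocer,Food,-420
2024-07-10,Cafe,Food,-60
2024-07-12,Landlord,Rent,-1200
"""

def category_flows(csv_text):
    """Aggregate transactions into Income -> category flows for a Sankey diagram."""
    flows = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        amount = float(row["amount"])
        if amount < 0:                       # spending rows only
            flows[("Income", row["category"])] += -amount
    return dict(flows)

print(category_flows(SAMPLE))
# {('Income', 'Food'): 480.0, ('Income', 'Rent'): 1200.0}
```

The LLM-assisted part would only be filling in the `category` column for uncategorized rows; the plumbing stays this simple.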
The Plaid API JSON includes inferred transaction categories.
OFX is one alternative to CSV for transaction data.
ofxparse parses OFX: https://pypi.org/project/ofxparse/
The W3C Interledger Protocol (ILP) handles inter-ledger data for traditional and digital ledgers. ILP also has a message spec for transactions.
Scientists convert bacteria into efficient cellulose producers
> A new approach has been presented by the research group led by André Studart, Professor of Complex Materials at ETH Zurich, using the cellulose-producing bacterium Komagataeibacter sucrofermentans. [...]
> K. sucrofermentans naturally produces high-purity cellulose, a material that is in great demand for biomedical applications and the production of packaging material and textiles. Two properties of this type of cellulose are that it supports wound healing and prevents infections.
From "Cellulose Packaging: A Biodegradable Alternative" https://www.greencompostables.com/blog/cellulose-packaging :
> What is cellulose packaging? Cellulose packaging is made from cellulose-based materials such as paper, cardboard, or cellophane.
> Cellulose can also be used to produce a type of bioplastic that is biodegradable and more environmentally friendly than petroleum-based plastics. These bioplastics can be utilized to make food containers, bottles, cups or trays.
> Can cellulose replace plastic? With an annual growth rate of 5–10% over the past decade, cellulose is already at the forefront of replacing plastic in everyday use.
> Cellophane, especially, is set to replace plastic film packaging soon. According to a Future Market Insights report, cellulose packaging will have a compound annual growth rate of 4.9% between 2018 and 2028.
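Sanity-checking that CAGR figure: 4.9% compounded over the 2018-2028 decade works out to roughly a 1.6x larger market:

```python
# 4.9% CAGR between 2018 and 2028 (10 compounding years)
growth = 1.049 ** 10
print(f"~{growth:.2f}x market size over the decade")
```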
Is there already a cling-wrap cellophane? Silicone lids are washable, freezable, and microwavable.
Open Source Farming Robot
It still looks like the software is written by people who don't know how to care for plants. You don't spray water on leaves as shown in the video; you'll just end up with fungus infestation. You water the soil and nourish the microorganisms that facilitate nutrient absorption in roots. But, I don't see any reason the technology can't be adapted to do the right thing.
Probably?
But spraying water on leaves is not only the way water naturally gets to plants, it's often the only practical way to water crops at scale. Center-pivot irrigation has dramatically increased the amount and reliability of arable cropland, while being dramatically less sensitive to topography and preparation than flood irrigation.
The advice to "water the soil, not the leaves" is founded in manual watering regimes in very small-scale gardening, often with crops bred to optimize for unnaturally prolific growth at the cost of susceptibility to fungal diseases, but which are still immature, exposing the soil. Or with transplanted bushes and trees where you have full access to the entire mulch bed. And it's absolutely a superior method, in those instances... but it's not like it's a hard-and-fast rule.
We can extend the technique out to mid-size market gardens with modern drip-lines, at the cost of adding to the horrific amounts of plastic being constantly UV-weathered that we see in mid-size market gardens.
Drip irrigation is kind of a thing out here in the desert...
And then there is buried drip irrigation...
Yes, but as GP said that doesn't scale. I live in an agriculture heavy community in the desert (mountain-west USA), and drip irrigation is only really used for small gardens and landscaping. Anyone with an acre or more of crops is not using drip.
I certainly agree that drip is the ideal, and when you aren't doing drip you want to minimize the standing water on leaves, but if I were designing this project I would design for scale.
But drip irrigation doesn’t scale because you would need to lay + connect + pressurize + maintain hundreds of miles of hoses. It’s high-CapEx.
A “watering robot”, meanwhile, can just do what a human gardener does to water a garden, “at scale.”
Picture a carrot harvester-alike machine — something whose main body sits on a dirt track between narrow-packed row-groups, with a gantry over the row-group supported by narrow inter-row wheels. Except instead of picker arms above the rows, this machine would have hoses hanging down between each row (or hoses running down the gantry wheels, depending on placement) with little electronic valve-boxes on the ends of the hoses, and side-facing jet nozzles on the sides of the valve boxes. The hoses stay always-fully-pressurized (from a tank + compressor attached to the main body); the valves get triggered to open at a set rate and pulse-width, to feed the right amount of water directly to the soil.
“But isn’t the ‘drip’ part of drip irrigation important?” Not really, no! (They just do it because constant passive input is lazy and predictable and lower-maintenance.) Actual rain is very bursty, so most plants (incl. crops) aren’t bothered at all by having their soil periodically drenched and then allowed to dry out again, getting almost bone dry before the next drenching. In fact, everything other than wetland crops like rice prefers this; and the dry-out cycles decrease the growth rates of things like parasitic fungi.
As a bonus, the exact same platform could perform other functions at the same time. In fact, look at it the other way around: a “watering robot” is just an extension of existing precision weeding robots (i.e. the machines designed to reduce reliance on pesticides by precision-targeting pesticide, or clipping/picking weeds, or burning/layering weeds away, or etc.) Any robot that can “get in there” at ground level between rows to do that, can also be made to water the soil while it’s down there.
Fair point, the robot could lower its nozzle to the ground and jet the water there, much like a human would, with probably not a lot of changes required. That does seem like it would be a good optimization.
Isn't it better to mist plants, especially if you can't delay watering due to full sun?
IIUC that's what big box gardening centers do; with fixed retractable hoses for misting and watering.
A robot could make and refill clay irrigation Ollas with or without microsprinkler inlets and level sensing with backscatter RF, but do Ollas scale?
Why have a moving part there at all? You could just modulate the valves between high and low flow, or better, use fixed-height sprayers.
FWIU newer solar weeding robots - which minimize pesticide use by direct substitution and minimize herbicide by vigilant crop monitoring - have fixed arrays instead of moving part lasers
An agricultural robot spec:
Large wheels, light frame, can right itself when terrain topology is misestimated, tensor operations per second (TOPS), computer vision (OpenCV, NeRF), modular sensor and utility mounts, open CAD model with material density for mass centroid and ground-contact outer-hull rollover estimation
There is a shift underway from ubiquitous tilling to lower and no till options. Tilling solves certain problems in a field - for a while - but causes others, and is relatively expensive. Buried lines do not coexist with tilling.
We are coming to understand a bit more about root biology and the ecosystem of topsoil and it seems like the 20th century approach may have been a highly optimized technique of using a sledgehammer to pound in a screw.
> IIUC that's what big box gardening centers do; with fixed retractable hoses for misting and watering.
Speaking from experience - they're making it up as they go along. Instructed to water the soil rather than the leaves and a certain number of seconds per diameter of pot, and set loose. The benefit of 'gentle rain' heads (high flow, low velocity, unlike misters) at close range is that they don't blow the heads off the blooming flowers they're selling, which is what happens if you use the heads designed for longer range at close range.
Is there evidence-based agricultural research for no-till farming?
No-Till Farming > Adoption across the world : https://en.wikipedia.org/wiki/No-till_farming#Adoption_acros... :
> * By 2023, farmland with strict no-tillage principles comprise roughly 30% of the cropland in the U.S.*
The new model used to score fuel lifecycle emissions is the Greenhouse Gases, Regulated Emissions and Energy use in Technologies (GREET) model: https://www.energy.gov/eere/greet :
> GREET is a tool that assesses a range of life cycle energy, emissions, and environmental impact challenges and that can be used to guide decision-making, research and development, and regulations related to transportation and the energy sector.
> * For any given energy and vehicle system, GREET can calculate:*
> - Total energy consumption (non-renewable and renewable)
> - Fossil fuel energy use (petroleum, natural gas, coal)
> - Greenhouse gas emissions
> - Air pollutant emissions
> - Water consumption
FWIU you have to plant cover crops to receive the new US ethanol / biofuel subsidies.
From "Some Groups Pan SAF Rules for Farmers Groups Criticize Cover Crop Requirement for Sustainable Aviation Fuel Modeling" (2024) https://www.dtnpf.com/agriculture/web/ag/news/business-input... :
> Some biofuel groups were encouraged the guidance would recognize climate-smart farm practices for the first time in ethanol or biodiesel's carbon intensity score. Others said the guidance hurts them because their producers have a hard time growing cover crops and carbon scoring shouldn't be limited to a few specific farming practices. Environmental groups said there isn't enough hard science to prove the benefit of those farm practices.
I have "The Living Soil Handbook" by Jessie Frost (in KY) here, and page 1 "Introduction" reads:
> 1. Disturb the soil as little as possible.
> 2. Keep the soil covered as much as possible.
> 3. Keep the soil planted as much as possible.
FWIU tilling results in oxidation and sterilization due to UV-C radiation and ozone; tilling turns soil to dirt; and dry dirt doesn't fix nitrogen or CO2 or host mycorrhizae, which help plants absorb nutrients.
Bunds with no irrigation, left untilled, appear to win in many climates. Maybe bunds, agrivoltaics, and agricultural robots can help undo soil depletion.
"Vikings razed the forests. Can Iceland regrow them?" https://news.ycombinator.com/item?id=40361034
https://westurner.github.io/hnlog/# ctrl-f soil , no-till / notill / no till / #NoTill
Moments in Chromecast's history
Is there any substitute for Chromecast Audio? I love being able to play in-sync audio to the group of receivers I choose throughout the property, using any amplifier. I’m not even using the digital optical input and I love them.
I think Sonos sued the heck out of Google for those, and it caused those devices to disappear for a few years. Sonos lost that case late last year though, so hopefully we'll see a resurgence?
https://www.reuters.com/legal/litigation/google-wins-repriev...
Otherwise, you can DIY it with a bunch of old devices or Raspberry Pis and https://github.com/geekuillaume/soundsync
I am fairly certain that the academic open source community had already published prior art for delay correction and volume control of speaker groups (which are obvious problems when you add multiple speakers to a system with transmission delay). IIRC there was a microsoft research blog post with a list of open source references for distributed audio from prior to 2006 for certain. (Which further invalidates the patent claims in question).
Before they locked Chromecast protocol down, it was easy to push audio from a linux pulseaudio sound server to Chromecast device(s).
The patchbay interface in soundsync looks neat. Also patch bay interfaces: BespokeSynth, HoustonPatchBay, RaySession, patchance, org.pipewire.helvum (GTK), easyeffects (GTK4 + GStreamer), https://github.com/BespokeSynth/BespokeSynth/issues/1614#iss...
pipewire handles audio and video streams. soundsync with video would be cool too.
FWIU Matter Casting is an open protocol which device vendors could implement.
Quantum state mimics gravitational waves
> Quantum particles in a spin-nematic state carry energy through waves.
> “We realised that the properties of the waves in the spin-nematic state are mathematically identical to those of gravitational waves,” says Shannon.
"Gravitational wave analogs in spin nematics and cold atoms" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.109.L... :
> Abstract: Large-scale gravitational phenomena are famously difficult to observe, making parallels in condensed matter physics a valuable resource. Here we show how spin nematic phases, found in magnets and cold atoms, can provide an analog to linearized gravity. In particular, we show that the Goldstone modes of these systems are massless spin-2 bosons, in one-to-one correspondence with quantized gravitational waves in flat spacetime. We identify a spin-1 model supporting these excitations and, using simulation, outline a procedure for their observation in a 23Na spinor condensate.
How Postgres stores data on disk – this one's a page turner
If anyone is interested in contrasting this with InnoDB (MySQL’s default engine), Jeremy Cole has an outstanding blog series [0] going into incredible detail.
Apache Arrow Columnar Format: https://arrow.apache.org/docs/format/Columnar.html :
> The Arrow columnar format includes a language-agnostic in-memory data structure specification, metadata serialization, and a protocol for serialization and generic data transport. This document is intended to provide adequate detail to create a new implementation of the columnar format without the aid of an existing implementation. We utilize Google’s Flatbuffers project for metadata serialization, so it will be necessary to refer to the project’s Flatbuffers protocol definition files while reading this document. The columnar format has some key features:
> Data adjacency for sequential access (scans)
> O(1) (constant-time) random access
> SIMD and vectorization-friendly
> Relocatable without “pointer swizzling”, allowing for true zero-copy access in shared memory
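A stdlib-only illustration of two of those claims, O(1) random access into a fixed-width column and zero-copy views over a shared buffer; the `array`/`memoryview` pair here just stands in for an Arrow buffer, it is not pyarrow:

```python
from array import array

# A fixed-width "column" of float64 values, as Arrow stores primitive types
col = array("d", [1.5, 2.5, 3.5, 4.5])

# O(1) random access: element i lives at byte offset i * itemsize
assert col.itemsize == 8
assert col[2] == 3.5

# Zero-copy view: memoryview shares the same buffer, no bytes are copied
view = memoryview(col)[1:3]
col[1] = 99.0          # mutate the underlying buffer...
print(view[0])         # ...and the view sees it: 99.0
```

"No pointer swizzling" means the same buffer bytes are valid wherever the buffer is mapped, which is what makes shared-memory and IPC use zero-copy.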
Are the major SQL file formats already SIMD optimized and zero-copy across TCP/IP?
Arrow doesn't do full or partial indexes.
Apache Arrow supports Feather and Parquet on-disk file formats. Feather is on-disk Arrow IPC, now with default LZ4 compression or optionally ZSTD.
Some databases support Parquet as the database flat file format (that a DBMS process like PostgreSQL or MySQL provides a logged, permissioned, and cached query interface with query planning to).
IIUC with Parquet it's possible both to query data tables offline, as files on disk with normal tools, and to query them online through a persistent process with tunable parameters that can also centrally enforce schema and referential integrity.
From https://stackoverflow.com/questions/48083405/what-are-the-di... :
> Parquet format is designed for long-term storage, where Arrow is more intended for short term or ephemeral storage
> Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future.
> Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files
> Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for query by multiple systems
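A toy illustration of why the dictionary + RLE encoding mentioned above shrinks low-cardinality columns so well (pure Python, not the actual Parquet encodings):

```python
from itertools import groupby

# A low-cardinality, repetitive column of the kind Parquet compresses well
col = ["US", "US", "US", "DE", "DE", "US", "US", "US", "US", "FR"]

# Dictionary encoding: store each distinct value once, then small integer codes
dictionary = sorted(set(col))                    # ['DE', 'FR', 'US']
codes = [dictionary.index(v) for v in col]

# Run-length encoding of the codes: (code, run_length) pairs
rle = [(code, len(list(run))) for code, run in groupby(codes)]
print(rle)                                       # [(2, 3), (0, 2), (2, 4), (1, 1)]

# Decoding round-trips to the original column
decoded = [dictionary[code] for code, n in rle for _ in range(n)]
assert decoded == col
```

Ten strings collapse to a 3-entry dictionary plus 4 run pairs; at millions of rows that gap is where Parquet's size advantage over raw Feather comes from.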
Those systems index Parquet. Can they also index Feather IPC, which an application might already have to journal and/or log, and checkpoint?
Edit: What are some of the DLT solutions for indexing given a consensus-controlled message spec designed for synchronization?
- cosmos/iavl: a Merkleized AVL+ tree (a balanced search tree with Merkle hashes and snapshots to prevent tampering and enable synchronization) https://github.com/cosmos/iavl/blob/master/docs/overview.md
- google/trillian has Merkle-hashed edges between rows in order in the table, but is centralized
- "EVM Query Language: SQL-Like Language for Ethereum" (2024) https://news.ycombinator.com/item?id=41124567 : [...]
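A minimal sketch of the Merkle-hashing part of such a tamper-evident table: a flat ordered list of rows rather than an AVL+ tree, and an illustrative hashing scheme, not the actual cosmos/iavl one:

```python
import hashlib

def merkle_root(rows):
    """Merkle root over an ordered list of row strings."""
    level = [hashlib.sha256(row.encode()).digest() for row in rows]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(a + b).digest()
                 for a, b in zip(level[::2], level[1::2])]
    return level[0].hex()

rows = ["alice,10", "bob,20", "carol,30"]
root = merkle_root(rows)
assert merkle_root(rows) == root                                  # deterministic
assert merkle_root(["alice,99", "bob,20", "carol,30"]) != root    # tamper detected
print(root[:16], "...")
```

Synchronization then reduces to comparing roots and recursing into subtrees that differ, which is why the DLT projects above Merkleize their indexes.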
Enum class improvements for C++17, C++20 and C++23
Speaking of enums, what's a better serialization framework for C++ like Serde in Rust, and then what about Linked Data support and of course also form validation.
Linked Data triples have (subject, predicate, object) and quads have (graph, subject, predicate, object).
RDF has URIs for all Subjects and Predicates.
RDF Objects may be URIs or literal values like xsd:string, xsd:float64, xsd:int (32bit signed value), xsd:integer, xsd:long, xsd:time, xsd:dateTime, xsd:duration.
RDFS then defines Classes and Properties, identified by string URIs.
How best to get from an Enum with (type, attr, {range of values}) to an rdfs:range definition in a schema with a URI prefix?
Python has dataclasses, which is newer than attrs; serde can also emit Python pickles.
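One possible shape for the Enum-to-rdfs:range mapping in Python. The URI prefix, property naming scheme, and Turtle-ish output here are all hypothetical modeling choices, not an established convention:

```python
from enum import Enum

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

def enum_to_rdfs(enum_cls, prefix="https://example.org/schema#"):
    """Declare an Enum as an rdfs:Class whose members are individuals,
    plus a property whose rdfs:range is that class (Turtle-ish output)."""
    cls_uri = f"<{prefix}{enum_cls.__name__}>"
    lines = [f"{cls_uri} a rdfs:Class ."]
    for member in enum_cls:
        lines.append(f"<{prefix}{enum_cls.__name__}/{member.name}> a {cls_uri} .")
    lines.append(f"<{prefix}has{enum_cls.__name__}> a rdf:Property ; "
                 f"rdfs:range {cls_uri} .")
    return "\n".join(lines)

print(enum_to_rdfs(Color))
```

schema.org's Enumeration models members the same way, as instances of the enumeration class, so this maps onto https://schema.org/Enumeration fairly directly.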
So far, most people would use whatever is provided alongside the frameworks they are already using for their application.
Since the 90's, Turbo Vision, OWL, MFC, CSet++, Tools.h++, VCL, ATL, PowerPlant, Qt, Boost, JUCE, Unreal, CryEngine,....
Maybe if reflection does indeed land into C++26, something might be more universally adopted.
"Reflection for C++26" (2024) https://news.ycombinator.com/item?id=40836594
Wt::Dbo, years ago, but that's just objects to and from SQL; it's not linked-data schema or fast serialization with e.g. Arrow.
There should be easy URIs for Enums, which I guess map most closely to the rdfs:range of an rdfs:Property. Something declarative and/or extracted by another source parser would work, to keep schema definitions DRY.
re Serde: If you want JSON, https://github.com/beached/daw_json_link is about as close as we can get. It interop's well with the reflection(or like) libraries such as Boost Describe or Boost PFR and will work well with std reflection when it is available.
Probably not too much work to add and then also build a JSONLD @context from all of the ~ message structs.
:Thing > https://schema.org/name , :URL , :identifier and subclasses
Thing > Intangible > Enumeration: https://schema.org/Enumeration
Adding reflection will be simple and backwards compatible with existing code as it would only come into play when someone hasn't manually mapped a type. This leaves the cases where reflection doesn't work(private member variables) still workable too.
Haven't looked at JSONLD much, but it seems like it could be added but would be a library above I think. Extracting the mappings is already doable and is done in the JSON Schema export.
"Exporting JSON Schema" https://github.com/beached/daw_json_link/blob/release/docs/c... :
#include <daw/json/daw_json_schema.h>

std::string schema = daw::json::to_json_schema<MyType>( "identifier", "title" );
From https://westurner.github.io/hnlog/#comment-38526588 :

> SHACL is used for expressing integrity constraints on complete data, while OWL allows inferring implicit facts from incomplete data; SHACL reasoners perform validation, while OWL reasoners do logical inference.
- "Show HN: Pg_jsonschema – A Postgres extension for JSON validation" https://news.ycombinator.com/item?id=32186878 re: json-ld-schema, which bridges JSONschema and SHACL for JSONLD
Twisted carbon nanotubes store more energy than lithium-ion batteries
> By making single-walled carbon nanotubes (SWCNTs) into ropes and twisting them like the string on an overworked yo-yo, Katsumi Kaneko, Sanjeev Kumar Ujjain and colleagues showed that they can store twice as much energy per unit mass as the best commercial lithium-ion batteries.
> SWCNTs are made from sheets of pure carbon just one atom thick that have been rolled into a straw-like tube. They are impressively tough – five times stiffer and 100 times stronger than steel – and earlier theoretical studies by team member David Tománek and others suggested that twisting them could be a viable means of storing large amounts of energy in a compact, lightweight system.
> [...] The maximum gravimetric energy density (that is, the energy available per unit mass) they measured was 2.1 MJ/kg (583 Wh/kg). While this is lower than the most advanced lithium-ion batteries, which last year hit a record of 700 Wh/kg, it is much higher than commercial versions, which top out at around 280 Wh/kg. The SWCNT ropes also maintained their performance over at least 450 twist-release cycles
"Giant nanomechanical energy storage capacity in twisted single-walled carbon nanotube ropes" (2024) https://www.nature.com/articles/s41565-024-01645-x :
> Abstract: A sustainable society requires high-energy storage devices characterized by lightness, compactness, a long life and superior safety, surpassing current battery and supercapacitor technologies. Single-walled carbon nanotubes (SWCNTs), which typically exhibit great toughness, have emerged as promising candidates for innovative energy storage solutions. Here we produced SWCNT ropes wrapped in thermoplastic polyurethane elastomers, and demonstrated experimentally that a twisted rope composed of these SWCNTs possesses the remarkable ability to reversibly store nanomechanical energy. Notably, the gravimetric energy density of these twisted ropes reaches up to 2.1 MJ kg−1, exceeding the energy storage capacity of mechanical steel springs by over four orders of magnitude and surpassing advanced lithium-ion batteries by a factor of three. In contrast to chemical and electrochemical energy carriers, the nanomechanical energy stored in a twisted SWCNT rope is safe even in hostile environments. This energy does not deplete over time and is accessible at temperatures ranging from −60 to +100 °C.
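The unit conversion in the abstract checks out: 1 Wh = 3600 J, so 2.1 MJ/kg is ~583 Wh/kg, and the comparisons against the cited battery figures follow:

```python
MJ_PER_KG = 2.1                       # gravimetric energy density from the paper
wh_per_kg = MJ_PER_KG * 1e6 / 3600    # 1 Wh = 3600 J
print(f"{wh_per_kg:.0f} Wh/kg")       # 583 Wh/kg, matching the article

# Versus the battery figures cited in the article (Wh/kg):
print(f"{wh_per_kg / 280:.1f}x commercial Li-ion (280 Wh/kg)")
print(f"{wh_per_kg / 700:.2f}x the 700 Wh/kg record cell")
```

The "factor of three" claim in the abstract presumably compares against a lower commercial baseline than the 280 Wh/kg figure in the article.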
Ask HN: How much would it cost to build a RISC CPU out of carbon?
RFC: Estimates on program costs to:
Evaluate carbon-based alternatives for fabrication of electronic digital and quantum computers
Evaluate scalable production processes for graphene-based transistors and other carbon-based nanofabrication capabilities
Evaluate supply chain challenges in semiconductor and superconductor manufacturing and design
Invest in developing chip fabrication capabilities that do not require rare earths in order to eliminate critical supply chain risks
> RFC: Estimates on program costs to:
> Evaluate carbon-based alternatives for fabrication of electronic digital and quantum computers
> Evaluate scalable production processes for graphene-based transistors and other carbon-based nanofabrication capabilities
> Evaluate supply chain challenges in semiconductor and superconductor manufacturing and design
> Invest in developing chip fabrication capabilities that do not require rare earths in order to eliminate critical supply chain risks
- "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024-06) https://news.ycombinator.com/item?id=40719725
Group1 Unveils First Potassium-Ion Battery in 18650 Format
> Leveraging Kristonite™, Group1's flagship product, a uniquely engineered 4V cathode material in the Potassium Prussian White (KPW) class, this battery enables the best combination of performance, safety, and cost when compared to LiFePO4 (LFP)-based LIBs and Sodium-ion batteries (NIBs).
> The 18650 form factor is the most widely used and designed cell formats. These new 18650 batteries use commercial graphite anodes, separators, and electrolyte formulations comprised of commercially sourced components. They offer superior cycle life and excellent discharge capability, and they operate at 3.7V. The product release exceeds initial performance expectations and demonstrates a practical path to achieve a gravimetric energy density of 160-180Wh/kg, which is standard for LFP-LIB.
https://interestingengineering.com/energy/first-18650-potass... :
> 3.7v, 18650
How much less bad for plants is a Potassium Ion battery compared with NiMH, NiCD, Li-ion, LiFePO4, and Sodium-ion batteries?
FWIU salt kills plants and lithium kills plants; NiMH and NiCd shouldn't be disposed of in the trash either; whereas potassium (K) is a macronutrient essential for plant growth.
There are fertilizer shortages. Is there a potassium fertilizer shortage?
"Potassium depletion in soil threatens global crop yields" (2024) https://phys.org/news/2024-02-potassium-depletion-soil-threa...
"Global food security threatened by potassium neglect" (2024) https://www.nature.com/articles/s43016-024-00929-8 :
> Inadequate potassium management jeopardises food security and freshwater ecosystem health. Potassium, alongside nitrogen and phosphorus, is a vital nutrient for plant growth1 and will be fundamental to achieving the rapid rises in crop yield necessary to sustain a growing population. Sustainable nutrient management is pivotal to establishing sustainable food systems and achieving the UN Sustainable Development Goals. While momentum to deliver nitrogen [2] and phosphorus sustainability [3] builds, potassium sustainability has been chronically neglected. There are no national or international policies or regulations on sustainable potassium use equivalent to those for nitrogen and phosphorus. Calls to mitigate rising potassium soil deficiency by increasing potassium inputs in arable agriculture are understandable [4,5]. However, substantial knowledge gaps persist regarding the potential environmental impacts of such interventions. We outline six proposed actions that aim to prevent crop yield declines due to soil potassium deficiency, safeguard farmers from price volatility in potash (i.e. mined potassium salts used to make fertiliser) and address environmental and ecosystem concerns associated with potash mining and increased potassium fertiliser use.
From https://www.pure.ed.ac.uk/ws/portalfiles/portal/409660541/Br... :
> 1. Review current potassium stocks and flows
> 2. Establish capabilities for monitoring and predicting potassium price fluctuations
> 3. Help farmers maintain sufficient soil potassium levels
> 4. Evaluate the environmental effects of potash mining and increased potassium162 application to identify sustainable practices
> 5. Develop a global strategy to transition to a circular potassium economy (*)
> 6. Accelerate intergovernmental cooperation as a catalyst for change
How much less bad for oceans are Potassium Ion batteries compared with NiMH, NiCD, Li-ion, LiFePO4, and Sodium-ion batteries?
USB Sniffer Lite for RP2040
Can this be used to capture the packets of another computer?
The USB packets? Yes.
So it should be possible to bridge wired USB wirelessly over 802.11n (2.4 GHz) and Bluetooth BLE 5.2 with two RP2040w, or one and software on a larger computer on the other side with e.g. google/bumble
Why go through all this trouble if you already have physical access to the computer?
Intel took billions from the CHIPS Act, and gave nothing back
I respect and like Intel, but even with my biases I am nonetheless mildly annoyed as a taxpayer that we paid so much into CHIPS to still end up losing the silicon wars to Taiwan/China.
I want my tax dollars back.
We gave them the money this year. They are building fabs. This stuff takes years.
One fab can cost $20 billion.
This past year, they were expected to piss away the money we gave them on a factory in high-risk genocideland but not in the USA.
How could they have prevented those expenses?
They've finally introduced lower power gaming chips.
Hopefully they build fabs here, and we alleviate the supply chain blockade and competitive obstruction by fabricating chips out of graphene and other forms of carbon.
About Google Chrome's "This extension may soon no longer be supported"
I don’t get why we need only a declarative manifest format to specify blocking rules.
Chrome has the ability to run code in a sandboxed environment with no external network access. This could be done in JS, or it could be WASM.
So, why can’t they leverage that? Ublock receives the page being visited and it needs to return an array of elements to block. The browser can do the replacement and element stripping itself. This is the approach Safari takes on iOS for content blocking extensions.
Instead, Chrome went with a “limited set of static patterns in JSON format” approach, which rules out or limits a lot of use cases.
But… why? The obvious answer is because they don’t want adblockers, but this seems a bit too obvious. Is there some technical reason I’m missing for this?
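For reference, the "static patterns in JSON format" are declarativeNetRequest rules. From memory of the documented Manifest V3 format (verify details against the API reference), a static blocking rule looks roughly like:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```

Matching is fully declarative; there is no callback in which extension code ever sees the request, which is exactly the limitation being discussed.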
"A look inside the BPF verifier [lwn]" https://news.ycombinator.com/item?id=41135371
eBPF: https://en.wikipedia.org/wiki/EBPF
"How are eBPF programs written?" https://ebpf.io/what-is-ebpf/#how-are-ebpf-programs-written :
> In a lot of scenarios, eBPF is not used directly but indirectly via projects like Cilium, bcc, or bpftrace which provide an abstraction on top of eBPF and do not require writing programs directly but instead offer the ability to specify intent-based definitions which are then implemented with eBPF.
> If no higher-level abstraction exists, programs need to be written directly. The Linux kernel expects eBPF programs to be loaded in the form of bytecode. While it is of course possible to write bytecode directly, the more common development practice is to leverage a compiler suite like LLVM [clang] to compile pseudo-C code into eBPF bytecode
WASM eBPF for adblocking
A look inside the BPF verifier [lwn]
> The BPF verifier is, fundamentally, a form of static analysis. It determines whether BPF programs submitted to the kernel satisfy a number of properties, such as not entering an infinite loop, not using memory with undefined contents, not accessing memory out of bounds, only calling extant kernel functions, only calling functions with the correct argument types, and others. As Alan Turing famously proved, correctly determining these properties for all programs is impossible — there will always be programs that do not enter an infinite loop, but which the verifier cannot prove do not enter an infinite loop. Despite this, the verifier manages to prove the relevant properties for a large amount of real-world code.
[ Halting problem: https://en.wikipedia.org/wiki/Halting_problem ]
> Some basic properties, such as whether the program is well-formed, too large, or contains unreachable code, can be correctly determined just by analyzing the program directly. The verifier has a simple first pass that rejects programs with these problems. But the bulk of the verifier's work is concerned with determining more difficult properties that rely on the run-time state of the program [dynamic analysis].
Ask HN: What's the consensus on "unit" testing LLM prompts?
LLMs are notoriously non deterministic, which makes it hard for us developers to trust them as a tool in a backend, where we usually expect determinism.
I’m in a situation where using an LLM makes sense from a technical perspective, but I’m wondering if there are good practice on testing, besides manual testing:
- I want to ensure my prompt does what I want 100% of the time
- I want to ensure I don’t get regressions as my prompt evolve, or when updating the version of the LLM I use, or even if I switch to another LLM
The ideas I have in mind are:
- forcing the LLM to return JSON with a strict definition
- running a fixed set of tests periodically with my prompt and checking I get the expected result
Are there specificities with LLM prompt testing I should be aware of? Are some good practices emerging?
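A sketch of the "fixed set of tests" idea with a stubbed model call. `call_llm` is a hypothetical stand-in for your provider's SDK, and the schema and cases are illustrative; the point is schema/threshold assertions rather than exact-match, since completions are non-deterministic:

```python
import json

# Hypothetical stand-in for a real LLM client call; swap in your
# provider's SDK. Forcing strict JSON output makes replies assertable.
def call_llm(prompt: str) -> str:
    return json.dumps({"sentiment": "positive", "confidence": 0.93})

# Fixed "golden" cases, re-run periodically (e.g. nightly, and on any
# prompt or model-version change) to catch regressions.
CASES = [
    ("I love this product!", "positive"),
]

def test_sentiment_prompt():
    for text, expected in CASES:
        reply = json.loads(call_llm(f"Classify sentiment as JSON: {text}"))
        assert set(reply) == {"sentiment", "confidence"}  # strict schema
        assert reply["sentiment"] == expected
        assert 0.0 <= reply["confidence"] <= 1.0
```

Running the same cases against each candidate model/prompt pair gives a crude regression signal even without determinism.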
From "Asking 60 LLMs a set of 20 questions" (2023) https://news.ycombinator.com/item?id=37445401#37451493 : PromptFoo, ChainForge, openai/evals, TheoremQA,
https://news.ycombinator.com/item?id=40859434 re: system prompts
"Robustness of Model-Graded Evaluations and Automated Interpretability" https://www.lesswrong.com/posts/ZbjyCuqpwCMMND4fv/robustness...
"Detecting hallucinations in large language models using semantic entropy" (2024) https://news.ycombinator.com/item?id=40769496
AI-enabled solar panel installation robot lifts, places, and attaches
- "Robot-Installed Solar Panels Cut Costs by 50%" (2012) https://spectrum.ieee.org/robotinstalled-solar-panels-cut-co... :
> Here's the executive summary, since I've never done an executive summary before and it sounds fancy: using robots to set up a 14 megawatt solar power plant can potentially cuts costs from $2,000,000 to $900,000, while being constructed in eight times faster with only three human workers instead of 35.
> So there you go! If you're still reading, we can tell you a little bit about the robot that performs this incredible feat of engineering efficiency: it costs just under a million bucks, but it's built from off-the-shelf parts and in continuous use will supposedly pay for itself in either no time at all or less than a year, whichever comes last.
> [...] The robot itself has a mobile base that runs on tank treads, and a robot arm grips huge 145 watt panels one at a time and autonomously positions them in just the right spot on a pre-installed metal frame. Humans follow along behind, adding fasteners and making electrical connections,
- "Launch HN: Charge Robotics (YC S21) - Robots that build solar farms" (2022) https://news.ycombinator.com/item?id=30780455 :
> We're using a two-part robotic system to build this racking structure. First, a portable robotic factory placed on-site assembles sections of racking hardware and solar modules. This factory fits inside a shipping container. Robotic arms pick up solar modules from a stack and fasten them to a long metal tube (the "torque tube"). Second, autonomous delivery vehicles distribute these assembled sections into the field and fasten them in place onto target destination piles.
Show HN: Trayce – Network tab for Docker containers
Trayce (https://github.com/evanrolfe/trayce_gui) is an open source desktop application which monitors HTTP(S) traffic to Docker containers on your machine. It uses eBPF to achieve zero-configuration sniffing of TLS-encrypted traffic.
As a backend developer I wanted something which was similar to Wireshark or the Chrome network tab, but which intercepted requests & responses to my containers for debugging in a local dev environment. Wireshark is a great tool but it seems more geared towards lower level networking tasks. When I'm developing APIs or microservices I dont care about packets, I'm only concerned with HTTP requests and their responses. I also didn't want to have to configure a pre-shared master key to intercept TLS, I wanted it to work out-of-the-box.
Trayce is in beta phase, so feedback is very welcome, bug reports too. The frontend GUI is written in Python with the Qt framework. The TrayceAgent, which does the intercepting of traffic, is written in Go and eBPF.
Looks very cool. I think you should write a docker extension https://www.docker.com/products/extensions/
As long as it's in addition to the standalone app. Not everybody uses Docker Desktop.
Podman Desktop supports Docker Desktop extensions and also its own extension API. https://podman-desktop.io/extend
"Developing a Podman Desktop extension" https://podman-desktop.io/docs/extensions/developing
This tool has really cool potential!
Just one problem I noticed immediately that prevents me from using this: the docker agent container[1] isn't multi-architecture, which will be an issue on Apple Silicon devices. This is something I have some experience setting up if you are looking for help, though it will take some research to figure out how to get it going in GitHub Actions etc.
1: https://github.com/evanrolfe/trayce_agent/
EDIT: quick search found this post, tested on a side project repo it works great: https://depot.dev/blog/multi-platform-docker-images-in-githu...
Good point, thanks. It's only ever been tested on a Mac with an Intel chip. I will try and sort this out ASAP!
I did submit a PR with a partial implementation (the Docker and GitHub side); I am useless at the C and low-level code.
Who has CI build runners for the given architectures?
It looks[1] like GitHub Actions only runs on amd64 hosts, so if you use a platform matrix like in this example[2], I am fairly certain it is running under qemu, just based on the fact that it takes roughly 10 times longer to run. I am aware that Blaze[3] has arm64 runners.
If you are willing to pay for it, you can also setup a runner that uses AWS Graviton EC2 instances[4], we do that at my workplace for our multi architecture builds.
1: https://docs.github.com/en/actions/using-github-hosted-runne...
2: https://docs.docker.com/build/ci/github-actions/multi-platfo...
Genetically synthesized supergain broadband wire-bundle antenna
Genetic optimisation seems like an odd choice here. The fundamental operation of genetic optimisation is crossing over: taking two solutions and picking half the parameters from one, and half from the other. This makes sense if parameters make independent contributions to fitness. But in the case of antenna design, surely that's exactly what they don't do? The position of each element relative to the other elements matters enormously. If you had two decent antenna designs, cut both in half, and glued the halves together, I would expect that to be a crappy antenna.
It would be interesting to see an evaluation of genetic optimisation vs more conventional techniques like simplex or BOBYQA, or anything else in NLopt.
GA methods also may converge without expert bias in parameters but typically after more time or generations of mutation, crossover, and selection.
Why would the fitness be lower with mutation, crossover, and selection?
Manual optimization is selection: the - possibly triple-blind - researcher biases the analysis by choosing models, hyperparameters, and parameters. This avoids search cost, but just like gradient descent it can result in unexplored spaces where optima may lurk.
There's already a GA (Genetic Algorithm) success story in deep-space antenna design.
But there are also dielectric and Rydberg antennae, which don't at all need to be as long as the radio wavelength. How long would that have taken the infinite monkeys GA?
/?hnlog: antenna, Rydberg, dielectric: https://westurner.github.io/hnlog/#comment-38054784
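For reference, the "generations of mutation, crossover, and selection" loop can be sketched in a few lines. The fitness function is a toy and every constant is an arbitrary illustration choice, not a recommendation:

```python
import random

def fitness(bits):
    """Toy objective: maximize -(x - 3)^2 over bit-encoded x in [0, 15]."""
    x = int("".join(map(str, bits)), 2)
    return -(x - 3) ** 2

def evolve(pop_size=20, bits=4, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half (elitist, so the best never regresses)
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:            # mutation: flip one random bit
                i = rng.randrange(bits)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

On a separable toy objective like this, crossover helps; the upthread point is that antenna geometries are precisely the non-separable case where recombining halves of two good designs need not produce a good design.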
Cylinder sails promise up to 90% fuel consumption cut for cargo ships
EVM Query Language: SQL-Like Language for Ethereum
- How does this compare to the BigQuery copy of the chain?
"Ethereum Mainnet example queries" https://cloud.google.com/blockchain-analytics/docs/example-e...
- from "Show HN: A Database Generator for EVM with CRUD and On-Chain Indexing" https://news.ycombinator.com/item?id=36144368 https://github.com/KenshiTech/SolidQuery ; rdflib-leveldb ?
- re: affording decentralized CT Certificate Transparency queries: https://news.ycombinator.com/item?id=30782522 :
> Indexers are node operators in The Graph Network that stake [GRT] in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn query fees that are rebated according to an exponential rebate function.
I prefer rST to Markdown
A Table of Contents instruction that works across all markdown implementations would be an accomplishment.
OTOH, Markdown does not require newlines between list elements; which is more compact but limits the HTML that can be produced from the syntax.
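Concretely, in CommonMark terms this is the tight vs. loose list distinction:

```markdown
Tight (no blank lines; items render as bare <li> content):

- one
- two

Loose (blank lines between items; each item is wrapped in <p>):

- one

- two
```

Only the loose form comfortably admits multi-block list items (paragraphs, nested code blocks), which is the HTML limitation of the compact syntax.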
MyST Markdown adds Sphinx roles and directives to Markdown.
nbsphinx adds docutils (RST) support back to Jupyter Notebooks. (IPython notebook originally supported RST and Markdown.)
Researchers trap atoms, force them to serve as photonic transistors
> "Using state-of-the-art nanofabrication instruments in the Birck Nanotechnology Center, we pattern the photonic waveguide in a circular shape at a diameter of around 30 microns (three times smaller than a human hair) to form a so-called microring resonator. Light would circulate within the microring resonator and interact with the trapped atoms," adds Hung.
- -459.67 F, Cesium, Tractor beam
"Trapped Atoms and Superradiance on an Integrated Nanophotonic Microring Circuit" (2024) : https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.03... :
> Interfacing cold atoms with integrated nanophotonic devices could offer new paradigms for engineering atom-light interactions and provide a potentially scalable route for quantum sensing, metrology, and quantum information processing. However, it remains a challenging task to efficiently trap a large ensemble of cold atoms on an integrated nanophotonic circuit. Here, we demonstrate direct loading of an ensemble of up to 70 atoms into an optical microtrap on a nanophotonic microring circuit. Efficient trap loading is achieved by employing degenerate Raman-sideband cooling in the microtrap, where a built-in spin-motion coupling arises directly from the vector light shift of the evanescent-field potential on a microring. Atoms are cooled into the trap via optical pumping with a single free space beam. We have achieved a trap lifetime approaching 700 ms under continuous cooling. We show that the trapped atoms display large cooperative coupling and superradiant decay into a whispering-gallery mode of the microring resonator, holding promise for explorations of new collective effects. Our technique can be extended to trapping a large ensemble of cold atoms on nanophotonic circuits for various quantum applications.
Rediscovering Transaction Processing from History and First Principles
Joran from TigerBeetle here! Really stoked to see a bit of Jim Gray history on the front page and happy to dive into how TB's consensus and storage engine implements these ideas (starting from main! https://github.com/tigerbeetle/tigerbeetle/blob/main/src/tig...).
TPS: Transactions Per Second: https://en.wikipedia.org/wiki/Transactions_per_second
"A Measure of Transaction Processing Power" (1985)
"A measure of transaction processing 20 years later" (2005) https://arxiv.org/abs/cs/0701162 .. https://scholar.google.com/scholar?cluster=11019087883708435...
About the cover sheet on those TPS reports.
Max throughput, max efficiency,
Network throughput: https://en.wikipedia.org/wiki/Network_throughput
"Is measuring blockchain transactions per second (TPS) stupid in 2024? Big Questions" https://cointelegraph.com/magazine/blockchain-transactions-p... :
> focusing on the raw TPS number is a bit like “counting the number of bills in your wallet but ignoring that some are singles, some are twenties, and some are hundreds.”
https://chainspect.app/dashboard describes each of their metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)
USD/day probably doesn't predict TPS, because the average transaction value is higher on networks with low TPS.
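As a back-of-envelope illustration with made-up numbers: two networks can settle the same USD/day at TPS figures 1000x apart, if average transaction value differs accordingly.

```python
# usd_per_day = tps * avg_value_usd * seconds_per_day
def usd_per_day(tps: float, avg_value_usd: float) -> float:
    return tps * avg_value_usd * 86_400

# Hypothetical networks: same daily settled value, very different TPS.
high_value_low_tps = usd_per_day(tps=10, avg_value_usd=10_000)
low_value_high_tps = usd_per_day(tps=10_000, avg_value_usd=10)
assert high_value_low_tps == low_value_high_tps  # same USD/day, 1000x TPS gap
```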
Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr
And then there's Uptime; or losses due to downtime (given SLA prorated costs)
> "A measure of transaction processing 20 years later (2005)"
This is one of my favorites. Along with https://www.microsoft.com/en-us/research/wp-content/uploads/... (also 20 years later) which has more detail.
> USD/day probably doesn't predict TPS, because the average transaction value is higher on networks with low TPS.
Exactly. If we only look at USD/day we might not see the trend in transaction volume.
What's happened is like a TV set going from B&W to full color 4K. If you look at the dimensions of the TV, it's pretty much the same. But the number of pixels (txns) is increasing, with their size (value) decreasing, for significantly higher resolution across sectors. And this is directly valuable because higher resolution (i.e. transactionality) enables arbitrage/efficiency.
I don't know what the frontier is on TPS and value in financial systems. Does HFT or market-making really add that much value compared to direct capital investment and the trading frequency of an LTSE Long Term Stock Exchange?
High tx fees (which are simply burnt) disincentivize HFT, which may also be a waste of electricity like PoW, in terms of allocation.
Low tx fees increase the likelihood of arbitrage that reduces the spread between prices (on multiple exchanges) and then what effect on volatility and real value?
But those are market dynamics, not general high-TPS system dynamics.
You can see the trend better in the world of instant payments, if you look to India's UPI or Brazil's PIX, how they've taken over from cash and are disrupting credit card transactions, with this starting to spread to Europe and the US: https://www.npci.org.in/what-we-do/upi/product-statistics
However, it's not limited to fintech.
For example, in some countries, energy used to be transacted once a month. Someone would come to your house or business, read your meter, and send you a bill for your energy usage. With the world moving away from coal to clean energy and solar, this is changing because the price of energy now follows the sun. So energy providers are starting to price energy, not once a month, but every 30 minutes. In other words, this ~1,440x increase in transaction volume (48 half-hour prices per day × 30 days, versus one monthly bill) enables cleaner, more efficient energy pricing.
You see the same trend also in content and streaming. You used to buy an album, then a song, and now you stream on Spotify and Apple. Same for movies and Netflix.
The cloud is no different. Moving from colocation, to dedicated, to per-hour and then per-minute instance pricing, and now serverless.
And then Uber and Airbnb have done much the same for car rentals or house rentals: take an existing business model and make it more transactional.
RestrictedPython
A Python interpreter with costed opcodes would be smart.
But then still there is no oracle to predict whether user-supplied [RestrictedPython] code never halts; so, without resource quotas from VMs or Containers, there is unmitigated risk of resource exhaustion from user-supplied and developer-signed code.
IIRC there are one-liners that DOS Python and probably RestrictedPython too.
JupyterHub Spawners and Authenticators were created to spin up [k8s] containers on demand - with resource quotas - for registered users with e.g. JupyterHub and unregistered users with BinderHub.
Someone already has a presentation on how it's impossible to fully sandbox Python without OS process isolation?
You can instead have them run their code on their machine with Python in WASM in a browser tab; but should you trust the results if you do not test for convergence by comparing your local output with other outputs? Consensus with a party of one or an odd number of samples.
So, reproducibility as scientific crisis: "it worked on my machine" or "it works on the gitops CI build servers" and "it works in any browser" and the output is the same when the user runs their own code locally as when it is run on the server [with RestrictedPython].
I heard it was actually impossible to sandbox Python (and most or all other languages) with itself?
/? python sandbox RestrictedPython https://google.com/search?q=python+sandbox+RestrictedPython
> But then still there is no oracle to predict whether user-supplied [RestrictedPython] code never halts; so, without resource quotas from VMs or Containers, there is unmitigated risk of resource exhaustion from user-supplied and developer-signed code
This is actually something I was interested in a while ago, but in Scheme. For example GNU Guile has such a facility built in [0], though with a caveat that if you're not careful what you expose you could still make yourself vulnerable to DoS attack (the memory exhaustion kind). But if you don't expose any facility that allows the untrusted user to make large allocations at a time (in a single call) you should be fine (fingers crossed). I don't see a reason why the same mechanics couldn't be implemented in Python.
[0] https://www.gnu.org/software/guile/manual/html_node/Sandboxe...
Edit: in some sense this restricted Python stuff also reminds me of Safe Haskell (extension) [1], which came out a bit ahead of its time and is by this point almost forgotten. Might become relevant again in the future.
[1] https://begriffs.com/posts/2015-05-24-safe-haskell.html - better overview than the wiki page
Java implementations have run-time heap allocator limits configurable with the -Xms and -Xmx options for minimum and maximum heap size, and -Xss for per-thread stack size:
java -jar -Xms1024M -Xmx2048M -Xss1M example.jar
RAM and CPU and network IO cgroups are harder limits than a process's attempts to bound its own allocation with the VM.
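That distinction can be sketched from Python on a POSIX/Linux host: below, a child process gets self-imposed rlimits before running untrusted code (all limit values are arbitrary illustrations); cgroups applied from outside would bound the whole process tree more strictly.

```python
import resource
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run `code` in a child interpreter under OS-level resource limits.

    RLIMIT_AS caps the child's address space, RLIMIT_CPU caps CPU seconds,
    and the wall-clock `timeout` catches sleeps. POSIX/Linux only.
    """
    def set_limits():
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2,) * 2)  # 256 MiB
        resource.setrlimit(resource.RLIMIT_CPU, (1, 1))               # 1 CPU second
    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        preexec_fn=set_limits,
        capture_output=True,
        timeout=timeout,
    )
```

Past the wall-clock timeout, subprocess.run raises TimeoutExpired and kills the child; under RLIMIT_AS, oversized allocations fail inside the child with MemoryError rather than exhausting the host.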
TIL about hardened_malloc. Python doesn't have hardened_malloc, and IDK how configurable hardened_malloc is in terms of self-imposed process resource limits. FWIU hardened_malloc groups and thereby contains allocations by size. https://github.com/GrapheneOS/hardened_malloc
There is a reason that EVM and eWASM have costed opcodes and do not have socket libraries (blocking or async).
The default recursion limit is 1000 (sys.getrecursionlimit(); settable with sys.setrecursionlimit()); so the worst case is 1000 frames times the unbounded per-frame stack size: https://docs.python.org/3/library/sys.html#sys.setrecursionl...
Most (all?) algorithms can be rewritten with a stack instead of recursion, thereby avoiding per-frame stack overhead in languages without TCO Tail-Call Optimization like Python.
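A sketch of the usual transformation: summing an arbitrarily nested list with an explicit stack, so depth is bounded by heap rather than by the recursion limit.

```python
def deep_sum(nested):
    """Sum numbers in a nested list iteratively; the explicit `stack`
    list replaces the call stack, so no RecursionError at any depth."""
    total, stack = 0, [nested]
    while stack:
        item = stack.pop()
        if isinstance(item, list):
            stack.extend(item)
        else:
            total += item
    return total

# A list nested 100_000 deep would blow the default recursion limit
# for a naive recursive version, but is fine here:
deep = [1]
for _ in range(100_000):
    deep = [deep, 1]
```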
Kata containers or Gvisor or just RestrictedPython that doesn't run until it's checked into git [and optionally signed]?
setup.py could or should execute in a RestrictedPython sandbox if run as root (even in a container).
Starlark (formerly Skylark) is also a restricted subset of Python, for build configuration with Blaze, Bazel, Buck2, Caddy, and others.
"Starlark implementations, tools, and users" https://github.com/bazelbuild/starlark/blob/master/users.md
Ask HN: Am I crazy or is Android development awful?
TL;DR - what I can do in 10 minutes on a desktop python app (windows or Linux, both worked fine) seems near impossible as an Android app.
I have a simple application to prototype: take a wired USB webcam, display it on the screen on a computer device (ideally a small device/screen), and draw a few GUI elements on top of it.
Using Python scripting and OpenCV, I had a working cross-compatible script in 10 minutes, for both Linux and Windows.
Then I realized I'd love this to work on an Android phone. I have devices with USB OTG, and a USB-C hub for the webcam. I confirmed the hardware setup working using someone else's closed source app.
However the development process has been awful. Android Studio has so much going on for a 'Hello World', and trying to integrate various USB webcam libraries has been impossible, even with AI assistants and Google guiding me. Things to do with Gradle versions or Kotlin versions being wrong between the libraries and my Android studio; my project not being able to include external repos via dependencies or toml files, etc.
In frustration I then tried a few python-to-android solutions, which promise to take a python script and make an APK. I tried: Kivy + python for android, and then Beeswax or Briefcase (may have butchered names slightly). Neither would build without extremely esoteric errors that neither me nor GPT had any chance to fix.
Well, looks like modern mobile phones are not a great hacker's playground, huh?
I guess I will go for a raspberry pi equivalent. In fact I already have tested my script on a RPi and it's just fine. But what a waste, needing to use a new computer module, screen display, and battery, when the smartphone has all 3 components nicely set up already in one sleek package.
Anyways that's my rant, wanted to get others' takes on Android (or smartphone in general) dev these days, or even some project advice in case anyone has done something similar connecting a wired webcam to a smartphone.
/? termux USB webcam: https://www.google.com/search?q=termux+usb+webcam
Termux was F-droid only, but 4 years later is back on the Play Store: https://github.com/termux-play-store#current-status-for-user...
Termux has both glibc and musl libc. Android has bionic libc.
One time I got JupyterLab to run on Android in termux with `proot` and pip. And then the mobile UI needed work in a WebView app or just a browser tab. Maybe things would port back from Colab to JupyterLab.
conda-forge and Linux arm64 packages don't work on arm64 Android devices, so the only option is to install the *-dev dependencies and wait for compilation to finish on the Android device.
Waydroid is one way to work with Android APKs in a guest container on a Linux host.
Also, Android Studio doesn't run on Android or ChromiumOS without containers (which students can't have either).
When you get X on termux working, you see what mobile development could have been and weep for the fallen world we live in.
containers/podman > [Feature]: Android support: https://github.com/containers/podman/discussions/17717 :
> There are docker and containerd in termux-packages. https://github.com/termux/termux-packages/tree/master/root-p...
But Android 13+ supports rootless pKVM VMs, which podman-machine should be able to run containers in; (but only APK-installed binaries are blessed with the necessary extended filesystem attributes to exec on Android 4.4+ with SELinux in enforcing mode.)
- Android pKVM: https://source.android.com/docs/core/virtualization/architec... :
> qemu + pKVM + podman-machine:
> The protected kernel-based virtual machine (pKVM) is built upon the Linux KVM hypervisor, which has been extended with the ability to restrict access to the payloads running in guest virtual machines marked ‘protected’ at the time of creation.
> KVM/arm64 supports different execution modes depending on the availability of certain CPU features, namely, the Virtualization Host Extensions (VHE) (ARMv8.1 and later).
- "Android 13 virtualization lets [Pixel >= 6] run Windows 11, Linux distributions" (2022) https://news.ycombinator.com/item?id=30328692
It's faster to keep a minimal container hosting VM updated.
So, podman-machine for Android in Termux might help solve for development UX on Android (and e.g. Android Studio on Android).
podman-machine: https://docs.podman.io/en/latest/markdown/podman-machine.1.h...
Wait so you're telling me I can run podman on android?
Looks like it's almost possible to run podman-machine on Android; and it's already possible to manually create your qcow for the qemu on Android and then run containers in that VM: https://github.com/cyberkernelofficial/docker-in-termux
The Puzzle of How Large-Scale Order Emerges in Complex Systems
This has been a topic of interest for me for almost a decade so far.
See also: non-linear dynamics, dynamical systems
The study of complexity in the US sciences is fairly recent, with the notable kickoff of the Santa Fe Institute [1]. They have all the resources there, from classes to scientific conferences.
A few book recommendations I've picked up over the years:
- Lewin, Complexity: Life at the Edge of Chaos [2] (recommended for a gentle introduction with history)
- Gleick, Chaos: Making a New Science [3]
- Holland, Emergence: From Chaos To Order [4]
- Holland, Signals and Boundaries: Building Blocks for Complex Adaptive Systems [5]
- Holland, Hidden Order: How Adaptation Builds Complexity [6] (unread but part of the series)
- Miller, Complex Adaptive Systems: An Introduction to Computational Models of Social Life [7]
- Strogatz, Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, And Engineering [11] (edit: forgot i had this on my desk)
Also a shoutout to Stephen Wolfram, reading one of his articles [8] about underlying structures while stuck at an airport proved to be a pivotal moment in my life.
- Wolfram's New Kind of Science, as an exploration of simple rules computed to the point of complex behavior. [9]
- Wolfram's Physics Project, a later iteration on NKS ideas with some significant and thorough publication of work. [10]
Links:
[2] https://www.amazon.com/gp/product/0226476553
[3] https://www.amazon.com/gp/product/0143113453
[4] https://www.amazon.com/gp/product/0738201421
[5] https://www.amazon.com/gp/product/0262525933
[6] https://www.amazon.com/Hidden-Order-Adaptation-Builds-Comple...
[7] https://www.amazon.com/gp/product/0691127026
[8] https://writings.stephenwolfram.com/2015/12/what-is-spacetim...
[9] https://www.wolframscience.com/nks/
[10] https://wolframphysics.org/
[11] https://www.amazon.com/Nonlinear-Dynamics-Chaos-Applications...
About 30 years for me. As my username might suggest.
To this already excellent list I would add that the Santa Fe Institute (mentioned above) is continually putting out new work and papers on this topic.
Examples here: https://www.santafe.edu/research/results/books
Complex, nonlinear - possibly adaptive - systems with or without convergence but emergence, and structure!
Convergence: https://en.wikipedia.org/wiki/Convergence > STEM
Emergence: https://en.wikipedia.org/wiki/Emergence
Stability: https://en.wikipedia.org/wiki/Stability
Stability theory: https://en.wikipedia.org/wiki/Stability_theory
Glossary of systems theory: https://en.wikipedia.org/wiki/Glossary_of_systems_theory
Ask HN: Does collapse of the wave function occur with an AI observer?
Pnut: A C to POSIX shell compiler you can trust
If you are wondering how it handles C-only functions... it does not.
open(..., O_RDWR | O_EXCL) -> runtime error: `echo "Unknow file mode" ; exit 1`
lseek(fd, 1, SEEK_HOLE); -> invalid code (uses undefined _lseek)
socket(AF_UNIX, SOCK_STREAM, 0); -> same (uses undefined _socket)
Looking closer at the "cp" and "cat" examples, the write() call does not handle errors at all. Forget about partial writes; it does not even return -1 on failures.
"Compiler you can Trust", indeed... maybe you can trust it to get all the details wrong?
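For contrast, the standard POSIX idiom the generated cp/cat code is missing, sketched in Python (os.write is a thin wrapper over write(2); in Python it raises OSError on failure instead of returning -1):

```python
import os

# write() may write fewer bytes than requested, so a correct copy
# loop must retry with the remaining buffer until it is drained.
def write_all(fd, data):
    view = memoryview(data)
    while view:
        n = os.write(fd, view)  # may be a partial write
        view = view[n:]

# Demonstrate on a pipe.
r, w = os.pipe()
write_all(w, b"hello")
os.close(w)
result = os.read(r, 16)
os.close(r)
```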
There seems to be libc in the repo but many functions are TODO https://github.com/udem-dlteam/pnut/tree/main/portable_libc
Otherwise the builtins seems to be here https://github.com/udem-dlteam/pnut/blob/main/runtime.sh
FYI, the functions you list are not "C functions" but POSIX functions. I did not expect it to be complete, but it's still impressive for what it is.
There are Linux ports of the plan9 `syscall` binary, which is presumably necessary to implement parts of libc with shell scripts: https://stackoverflow.com/questions/10196395/os-system-calls...
I don't remember there being a way to keep a server listening on a /dev/tcp/$ip/$port port for sockets from shell scripts, at least not one that passes shellcheck.
How an 1803 Jacquard Loom Led to Computer Technology [video]
From "Calling your computer a Turing Machine is controversial" https://news.ycombinator.com/item?id=37709009 :
> Turing was familiar with the Jacquard loom for weaving patterns on punch cards and Babbage's, and was then tasked with brute-forcing a classical cipher.
And before the Jacquard loom was the music box.
Music box > Timeline: https://en.wikipedia.org/wiki/Music_box#Timeline
Jacquard machine > Importance in computing: https://en.wikipedia.org/wiki/Jacquard_machine#Importance_in...
Mechanical computer > Examples: https://en.wikipedia.org/wiki/Mechanical_computer#Examples :
> Antikythera: c. 100BC (a mechanical astronomical clock)
SQLite-jiff: SQLite extension for timezones and complex durations
What I'd find really interesting in this space, would be coalescing around a "standard" storage format and API.
This definitely fills a need [1], and I'd easily port something like this to my Go bindings as needed (no Rust necessary).
It's encouraging that the storage format is a (recently published) RFC 9557 [2]. Not so sure about having jiff on all the function names.
Making SQLite easy to extend (Alex's Rust FFI, the APSW Python bindings, etc) really is a treasure trove.
"RFC 9557: Date and Time on the Internet: Timestamps with Additional Information" https://www.rfc-editor.org/rfc/rfc9557.html
1996-12-19T16:39:57-08:00[America/Los_Angeles]
> The offset -00:00 is provided as a way to express that "the time in UTC is known, but the offset to local time is unknown" 1996-12-19T16:39:57-00:00
1996-12-19T16:39:57Z
> Furthermore, applications might want to attach even more information to the timestamp, including but not limited to the calendar system in which it should be represented. 1996-12-19T16:39:57-08:00[America/Los_Angeles][u-ca=hebrew]
1996-12-19T16:39:57-08:00[_foo=bar][_baz=bat]
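A minimal sketch of splitting off the RFC 9557 bracketed suffixes, since stdlib datetime does not understand them (the regex and function name are my own, not part of any library):

```python
import re
from datetime import datetime

# Separate the base ISO 8601 / RFC 3339 timestamp from the trailing
# [key] / [key=value] tags, then parse the base with fromisoformat.
def parse_rfc9557(s):
    m = re.match(r"^([^\[]+)((?:\[[^\]]*\])*)$", s)
    base, suffix = m.group(1), m.group(2)
    tags = re.findall(r"\[([^\]]*)\]", suffix)
    return datetime.fromisoformat(base), tags

dt, tags = parse_rfc9557(
    "1996-12-19T16:39:57-08:00[America/Los_Angeles][u-ca=hebrew]"
)
```

A real implementation would also resolve the time zone tag against the IANA database (e.g. via zoneinfo) and handle the critical-flag `!` prefix; this only tokenizes.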
Astropy supports the Julian calendar – circa Julius Caesar (introduced ~45 BC; Caesar, per legend, was born by Caesarean section, an Eastern procedure) – and also astronomical year numbering, which has a Year Zero. [1] There is still no zero in Roman numerals; there's "nulla" but no zero symbol. Modern zero is notated with the Arabic numeral 0.
[1] Year Zero, calendaring systems: https://wrdrd.github.io/docs/consulting/knowledge-engineerin...
astropy.time > Time Formats: https://docs.astropy.org/en/latest/time/#time-format
Which calendar is the oldest?
List of calendars: https://en.wikipedia.org/wiki/List_of_calendars
Epoch > Calendar eras: https://en.wikipedia.org/wiki/Epoch#Calendar_eras
--
Are string tags at the end of the datetime even indexable?
Shouldn't there be #LinkedData URLs or URIs in an RDFS vocabulary for IANA/olsen and also for calendaring systems?
E.g. Schema.org/dateCreated has a range of schema.org/DateTime, which supports ISO8601, which also specifies timezone Z (but not -00:00, as the RFC mentions).
Astounding that there's been no solution for calendar year date offsets on computers. Are there notations for indicating which system, or has everyone on earth also always assumed that bare integer years are relative to their preferred system?
Somewhere there's a chart of how recorded human history is only like 10K out of 900K (?) years of hominids of earth, through ice ages and interglacials like the Holocene.
Systemd Talks Up Automatic Boot Assessment in Light of the CrowdStrike Outage
> The only problem? Major Linux distributions aren't yet onboard with using the Automatic Boot Assessment feature.
systemd.io/AUTOMATIC_BOOT_ASSESSMENT/ : https://systemd.io/AUTOMATIC_BOOT_ASSESSMENT/
From https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for:?
systemd-analyze security
> Is there a tool like `audit2allow` for systemd units?
And also automatic variance in boot sequences, with timeouts.
Where does it explain how to determine that a systemd service unit is always failing at boot?
From https://news.ycombinator.com/item?id=41022664 re: potential boot features: mandatory dead man's switch failover to the next kernel+initrd unless [...]:
> For EFI you could probably set BootNext to something else early on, in combination with some restarting watchdog. GRUB can store state between boots https://www.gnu.org/software/grub/manual/grub/html_node/Envi... and has "default" and "fallback" vars.
Rrweb – record and replay debugger for the web
rr's trick is to record all sources of nondeterminism but otherwise execute code natively. One consequence is that it must record the results of syscalls.
Does Rrweb do the same for browser APIs and web requests?
The page mentions pixel-perfect replays, but does that require running on the same browser, exact same version, with the exact same experiments/feature flags enabled?
RRWeb only records changes to the DOM, it doesn't actually replay the JavaScript that makes those changes happen. So you see exactly what the user sees, but you're not able to inspect memory or anything like that.
There are a few caveats since not everything is captured in the DOM, such as media playback state and content in canvases. The user may also have some configurations that change their media queries, such as dark mode or prefers reduced motion.
Edit: and yes, to your point, browser differences would also render differently.
What about debugging and recording stack traces too?
"DevTools Protocol API docs—its domains, methods, and events": https://github.com/ChromeDevTools/debugger-protocol-viewer .. https://chromedevtools.github.io/devtools-protocol/
ChromeDevTools/awesome-chrome-devtools > Chrome Debugger integration with Editors: https://github.com/ChromeDevTools/awesome-chrome-devtools#ch...
DAP: Debug Adapter Protocol > Implementations: https://microsoft.github.io/debug-adapter-protocol/implement... :
- Microsoft/vscode-js-debug: https://github.com/microsoft/vscode-js-debug :
> This is a DAP-based JavaScript debugger. It debugs Node.js, Chrome, Edge, WebView2, VS Code extensions, and more. It has been the default JavaScript debugger in Visual Studio Code since 1.46, and is gradually rolling out in Visual Studio proper.
- awto/effectfuljs: https://github.com/awto/effectfuljs/tree/main/packages/vscod... :
> EffectfulJS Debugger: VSCode debugger for JavaScript/TypeScript. Besides the typical debugger's features it offers: Time-traveling, Persistent state, Platform independence, Programmable API, Hot mocking of functions or even parts of a function, Hot code swapping, Data breakpoints. This works by instrumenting JavaScript/TypeScript code and injecting necessary debugging API calls into it. It is implemented using EffectfulJS.
https://github.com/awto/effectfuljs : @effectful/debugger , @effectful/es-persist: https://github.com/awto/effectfuljs/tree/main/packages/es-pe...
I wish we had an rr for nodejs.
Do you mean a time travel debugger? I believe it would be an awesome feature to be able to record & replay program execution. I imagine the recordings would be huge in size, as there are many more degrees of freedom on the backend than on the frontend.
Today I found EffectfulJS Debugger, which is a DAP debugger with time travel and state persistence for JS: https://news.ycombinator.com/item?id=41036985
Don't snipe me in space-intentional flash corruption for STM32 microcontrollers
> Let’s first take a look at what the bootloader actually does. It manages 3 slots for operating system images, with each having around 500 KB reserved for it. Additionally, 2 redundant metadata structs are stored on different flash pages. During an update, one slot is overwritten, and then metadata is adjusted. We are resilient against power failures at any point, and as long as at least one image slot contains an operating system image, we can boot. [...]
> In our bootloader we can now enable a custom boot mode that ensures that if at least one image is bootable, it is booted, which will enable us to fix this problem remotely
"OneFileLinux: A 20MB Alpine metadistro that fits into the ESP" https://news.ycombinator.com/item?id=40915199 :
- eaut/efistub generates signed boot images containing kernels and initrd ramdisks in one file to ensure the verification of the entire initial boot process.
Are there already ways to fail to the next kernel+initrd for uboot, [ventoy, yumi, rufus], grub, systemd-boot, or efistub?
For EFI you could probably set BootNext to something else early on, in combination with some restarting watchdog. GRUB can store state between boots https://www.gnu.org/software/grub/manual/grub/html_node/Envi... and has "default" and "fallback" vars.
So the bootloader would always advance nextboot= to the next boot entry, and a process after a successful boot must update nextboot=$thisbootentry?
Is there another way with fewer write() calls?
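The pattern above can be done with two writes per healthy cycle: one write disarms the switch before each boot attempt, and one write re-arms it after services come up. A minimal sketch (illustrative only; a real bootloader would use grubenv or an EFI variable such as BootNext, not a file, and the names here are made up):

```python
import tempfile
from pathlib import Path

# A file stands in for persistent bootloader state.
flag = Path(tempfile.mkdtemp()) / "boot_ok"

def select_entry():
    # Bootloader side: consume the "good" flag, then disarm it.
    armed = flag.exists() and flag.read_text().strip() == "1"
    flag.write_text("0")              # write #1: disarm before booting
    return "primary" if armed else "fallback"

def mark_boot_successful():
    # Userspace side, run only after a healthy boot.
    flag.write_text("1")              # write #2: re-arm for next boot

flag.write_text("1")                  # previous boot was healthy
first = select_entry()                # armed -> primary
second = select_entry()               # never re-armed -> fallback
mark_boot_successful()
third = select_entry()                # re-armed -> primary again
```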
[deleted]
Scan HTML even faster with SIMD instructions (C++ and C#)
Nice. Is that microdata or RDFa?
- "SIMD-accelerated computer vision on a $2 microcontroller" (2024-06) https://news.ycombinator.com/item?id=40794553 ; https://github.com/topics/simd , SimSIMD, SIMDe
- From "Show HN: SimSIMD vs SciPy: How AVX-512 and SVE make SIMD nicer and ML 10x faster" https://news.ycombinator.com/item?id=37808036 :
> simdjson-feedstock: https://github.com/conda-forge/simdjson-feedstock/blob/main/...
> pysimdjson-feedstock: https://github.com/conda-forge/pysimdjson-feedstock/blob/mai...
- https://github.com/simd-lite/simd-json :
> [simd-lite/simd-json is a] Rust port of extremely fast simdjson JSON parser with serde compatibility
> Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.
Breakthrough bioplastic decomposes in 2 months
> A plastic alternative being developed in Denmark, made from barley starch and sugar beet waste, could soon be holding your leftovers.
> The best part about the University of Copenhagen-designed material is that it decomposes in nature in about two months, according to a lab summary. Standard plastic takes around 20 to 500 years (or more) to break down, according to the UN.
- "Researchers invent 100% biodegradable 'barley plastic'" https://news.ycombinator.com/item?id=40783777
Viewing Fast Vortex Motion in a Superconductor
> Nakamura and her colleagues have now developed their technique to measure vortex motion in two dimensions, this time in the iron-based superconductor FeSe0.5Te0.5, or FST. The researchers fabricated a 38-nm-thick film of FST and cooled it below its superconducting transition temperature (16.5 K). A coil generated a magnetic field, which produced vortices in the film. Additionally, the magnetic field induced a shielding current, which traced out a several-millimeter-diameter circle on the film.
> The researchers irradiated the film with a 20-picosecond infrared (0.3-terahertz) pulse and detected a second harmonic in the spectrum of the pulse that emerged from the sample, as in the previous experiments. But here they detected both polarizations of the emitted light, parallel and perpendicular to the incident pulse’s polarization.
> Using a roughly 1-mm-wide beam, they probed a region near the ring of the field-induced shielding current and then analyzed the transmitted waveforms in order to reconstruct the motion of a typical vortex residing in that location. The team found an oscillating, roughly parabolic trajectory rather than a straight line. This shape resulted from the interaction between the vortex, which is magnetic, and the shielding current. Discovering this motion was the most exciting part of the work, Nakamura says. “It felt like we were looking directly into the 2D motion of the vortex.”
"Picosecond trajectory of two-dimensional vortex motion in FeSe0.5Te0.5 visualized by terahertz second harmonic generation" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
"Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691
Automerge: A library of data structures for building collaborative applications
In practice most projects seem to use Yjs rather than Automerge. Is there an up-to-date comparison of the two? Has anyone here chosen Automerge over Yjs?
I’m quite familiar with both, having spent some time building a crdt library of my own. The authors of both projects are lovely humans. There are quite a lot of small differences that might matter to some people:
- Yjs is mostly made by a single author (Kevin Jahns). It does not store the full document history, but it does support arbitrarily many checkpoints which you can rewind a document to. Yjs is written in JavaScript. There’s a rust rewrite (Yrs) but it’s significantly slower than the JavaScript version for some reason. (5-10x slower last I checked).
- Automerge was started by Martin Kleppmann, Cambridge professor and author of Designing Data Intensive Applications. They have some funding now and as I understand it there are people working on it full time. To me it feels a bit more researchy - for example the team has been working on Byzantine fault tolerance features, rich text and other interesting but novel stuff. These days it’s written in rust, with wasm builds for the web. Automerge stores the entire history of a document, so unlike Yjs, deleted items are stored forever - with the costs and benefits that brings. Automerge is also significantly slower and less memory efficient than Yjs for large text documents. (It takes ~10 seconds & 200mb of ram to load a 100 page document in my tests.) I’m assured the team is working on optimisations; which is good because I would very much like to see more attention in that area.
They’re both good projects, but honestly both could use a lot of love. I’d love to have a “SQLite of local first software”. I think we’re close, but not quite there yet.
(There are some much faster text-based CRDTs around if that's your jam. Aside from my own work, Cola is also very impressive and clean - and orders of magnitude faster than Yjs and Automerge.)
> I’d love to have a “SQLite of local first software”
We have recently published a new research paper on replicating SQLite [1] in a local-first manner. We think it goes a step closer to that goal.
It looks very similar to Evolu (https://github.com/evoluhq/evolu)
cr-sqlite https://github.com/vlcn-io/cr-sqlite :
> Convergent, Replicated SQLite. Multi-writer and CRDT support for SQLite
From "SQLedge: Replicate Postgres to SQLite on the Edge" (2023) https://news.ycombinator.com/item?id=37063238#37067980 :
>> In technical terms: cr-sqlite adds multi-master replication and partition tolerance to SQLite via conflict free replicated data types (CRDTs) and/or causally ordered event logs
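The CRDT idea in miniature - a last-writer-wins register whose merge is commutative, so replicas converge regardless of sync order (an illustrative sketch, not cr-sqlite's or Yjs's actual algorithm):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class LWWRegister:
    value: Any
    timestamp: int   # logical clock of the last write
    site_id: str     # tie-breaker so all replicas pick the same winner

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        # Deterministic: highest (timestamp, site_id) wins on every replica.
        return max(self, other, key=lambda r: (r.timestamp, r.site_id))

a = LWWRegister("draft", timestamp=1, site_id="A")
b = LWWRegister("final", timestamp=2, site_id="B")
merged = a.merge(b)
```

The point is that `a.merge(b) == b.merge(a)`; conflict resolution needs no coordination, which is what makes multi-writer replication of SQLite rows tractable.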
Mangrove trees are on the move, taking the tropics with them
Mangrove forest restoration > Mangrove loss and degradation: https://en.wikipedia.org/wiki/Mangrove_restoration#Mangrove_...
A new way to control the magnetic properties of rare earth elements
Article: "Optical control of 4f orbital state in rare-earth metals" (2024) https://www.science.org/doi/10.1126/sciadv.adk9522 :
> [Terbium, and] generally for the excitation of localized electronic states in correlated materials
Import and Export Markdown in Google Docs
Hey folks, I'm the engineer who implemented the new feature. Just clearing up some confusion.
A lot of you are noticing the preexisting automatic detection feature from 2022 [1], which I also worked on. That's NOT what this newly announced feature is. The new feature supports full import/export, but it's still rolling out so you're likely not seeing it yet!
Hope you like it once it reaches you :)
[1] https://workspaceupdates.googleblog.com/2022/03/compose-with...
Thanks!
Google Colab also supports Markdown input cells as "Text" with a preview.
Does this work with Google Sites?
How to create a Google Sites page from a Markdown doc, with MyST Markdown YAML front matter
NotebookLM can generate Python, LaTeX, and Markdown.
How to Markdown and Git diff on a Chromebook without containers or Inspect Element because [...]
How to auto-grade Jupyter Notebooks with Markdown prose, with OtterGrader
How to build a jupyter-book from .rst, MyST Markdown .md, and .ipynb jupyter/nbformat notebooks containing MyST Markdown
Laser nanofabrication inside silicon with spatial beam modulation and seeding
"Laser nanofabrication inside silicon with spatial beam modulation and anisotropic seeding" (2024) https://www.nature.com/articles/s41467-024-49303-z :
> Abstract: Nanofabrication in silicon, arguably the most important material for modern technology, has been limited exclusively to its surface. Existing lithography methods cannot penetrate the wafer surface without altering it, whereas emerging laser-based subsurface or in-chip fabrication remains at greater than 1 μm resolution. In addition, available methods do not allow positioning or modulation with sub-micron precision deep inside the wafer. The fundamental difficulty of breaking these dimensional barriers is two-fold, i.e., complex nonlinear effects inside the wafer and the inherent diffraction limit for laser light. Here, we overcome these challenges by exploiting spatially-modulated laser beams and anisotropic feedback from preformed subsurface structures, to establish controlled nanofabrication capability inside silicon. We demonstrate buried nanostructures of feature sizes down to 100 ± 20 nm, with subwavelength and multi-dimensional control; thereby improving the state-of-the-art by an order-of-magnitude. In order to showcase the emerging capabilities, we fabricate nanophotonics elements deep inside Si, exemplified by nanogratings with record diffraction efficiency and spectral control. The reported advance is an important step towards 3D nanophotonics systems, micro/nanofluidics, and 3D electronic-photonic integrated systems.
- "Researchers achieve unprecedented nanostructuring inside silicon" https://phys.org/news/2024-07-unprecedented-nanostructuring-...
Film 'built atom-by-atom' moves electrons 7x faster than semiconductors
"Quantum microscopy study makes electrons visible in slow motion" https://phys.org/news/2024-07-quantum-microscopy-electrons-v... .. https://news.ycombinator.com/item?id=40981039 :
> If you change a few atoms at the atomic level, the macroscopic properties remain unchanged. For example, metals modified in this way are still electrically conductive, whereas insulators are not.
> However, the situation is different in more advanced materials, which can only be produced in the laboratory—minimal changes at the atomic level cause new macroscopic behavior. For example, some of these materials suddenly change from insulators to superconductors, i.e., they conduct electricity without heat loss.
> These changes [to superconductivity, for example] can happen extremely quickly, within picoseconds, as they influence the movement of electrons through the material directly at the atomic scale. A picosecond is extremely short, just a trillionth of a second. It is in the same proportion to the blink of an eye as the blink of an eye is to a period of more than 3,000 years.
> [...] The researchers were able to optimize their microscope in such a way that it repeats the experiment 41 million times per second and thus achieves a particularly high signal quality.
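The quoted blink analogy checks out arithmetically, assuming a typical blink of ~0.3 s (the blink duration is my assumption, not from the article):

```python
# A picosecond is to a blink as a blink is to roughly 3,000 years:
# solve blink / 1e-12 = X / blink for X and convert to years.
blink_s = 0.3                  # assumed typical blink duration
picosecond_s = 1e-12
equivalent_s = (blink_s / picosecond_s) * blink_s
years = equivalent_s / (365.25 * 24 * 3600)   # on the order of 3,000
```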
> The [crystalline ternary tetradymite] film — measuring just 100 nanometers wide, or about one-thousandth of the thickness of a human hair — was created through a process called molecular beam epitaxy, which involves precisely controlling beams of molecules to build a material atom-by-atom. This process allows materials to be constructed with minimal flaws or defects, enabling greater electron mobility, a measure of how easily electrons move through a material under an electric field.
> When the scientists applied an electric current to the film, they recorded electrons moving at record-breaking speeds of 10,000 centimeters squared per volt-second (cm^2/V-s). By comparison, electrons typically move at about 1,400 cm^2/V-s in standard silicon semiconductors, and considerably slower in traditional copper wiring.
MBE: Molecular-Beam Epitaxy: https://en.wikipedia.org/wiki/Molecular-beam_epitaxy
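The "7x faster" in the headline follows directly from the two mobilities quoted above:

```python
# Electron mobility comparison from the quoted figures.
tetradymite = 10_000   # cm^2/(V*s), the MBE-grown film
silicon = 1_400        # cm^2/(V*s), typical silicon semiconductor
ratio = tetradymite / silicon   # roughly 7x
```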
Quantum microscopy study makes electrons visible in slow motion
"Terahertz spectroscopy of collective charge density wave dynamics at the atomic scale" (2024) https://www.nature.com/articles/s41567-024-02552-7
Tlsd: Generate (message) sequence diagrams from TLA+ state traces
This looks pretty sweet. TLA+ looks scary and mathy at first, but it's really a pretty simple concept: define some state machines, explore the state space, see if you run into trouble. With this you get a nice classic sequence diagram to show how things went down.
TLAplus: https://en.wikipedia.org/wiki/TLA%2B
A learnxinyminutes for TLA+ might be helpful: https://learnxinyminutes.com/
awesome-tlaplus > Books, (University) courses teaching (with) TLA+: https://github.com/tlaplus/awesome-tlaplus#books
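The "explore the state space" idea in a minimal Python sketch: a BFS over reachable states with an invariant checked at each one. This is the concept behind the TLC model checker, not TLA+ syntax; the tiny two-counter machine is made up for illustration:

```python
from collections import deque

# Next-state relation: each counter can step up to 2.
def next_states(state):
    x, y = state
    succ = []
    if x < 2:
        succ.append((x + 1, y))
    if y < 2:
        succ.append((x, y + 1))
    return succ

# Invariant to check in every reachable state.
def invariant(state):
    return sum(state) <= 4

init = (0, 0)
seen, queue = {init}, deque([init])
while queue:
    s = queue.popleft()
    assert invariant(s), f"invariant violated in {s}"
    for t in next_states(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)
```

A trace of states from `init` to a violating state is exactly what tlsd would then render as a sequence diagram.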
Have you looked at using mermaidjs instead of SVG as an output? It might translate easier and plug into other solutions easier.
blockdiag > seqdiag is another syntax for Unicode sequence diagrams, optionally in ReST or MyST in Sphinx docs: http://blockdiag.com/en/seqdiag/examples.html#edge-types
blockdiag > nwdiag > rackdiag does server rack charts: http://blockdiag.com/en/nwdiag/
Otherwise mermaidjs probably has advantages including Jupyter Notebook support.
E.g. Gephi supports JSON of some sort. Supported graph formats: https://gephi.org/users/supported-graph-formats/
More graph and edge layout options might help with larger traces
yEd has many graph layout algorithms and parameters and supports GraphML XML; yEd: https://en.wikipedia.org/wiki/YEd
MermaidJS docs > Syntax > sequenceDiagram: https://mermaid.js.org/syntax/sequenceDiagram.html
It doesn't seem any of these would naturally be able to express the example case https://raw.githubusercontent.com/eras/tlsd/master/doc/seque... . But maybe it's possible?
In addition there's also the case of messages that are never processed, but I suppose that could be its own "never handled" box.
Lasers Could Solve the Plastic Problem
"Light-driven C–H activation mediated by 2D transition metal dichalcogenides" (2024) https://www.nature.com/articles/s41467-024-49783-z
> The researchers used low-power light to break the chemical bonding of the plastics and create new chemical bonds that turned the materials into luminescent carbon dots. Carbon-based nanomaterials are in high demand because of their many capabilities, and these dots could potentially be used as memory storage devices in next-generation computer devices. [...]
> Potential for Broader Applications: The specific reaction is called C-H activation, where carbon-hydrogen bonds in an organic molecule are selectively broken and transformed into a new chemical bond. In this research, the two-dimensional materials catalyzed this reaction that led to hydrogen molecules morphing into gas. That cleared the way for carbon molecules to bond with each other to form the information-storing dots.
> Further research and development are needed to optimize the light-driven C-H activation process and scale it up for industrial applications. However, this study represents a significant step forward in the quest for sustainable solutions to plastic waste management.
Nanorobot with hidden weapon kills cancer cells
Looks like gold nanoparticles kill glioblastoma and colon cancer cells: https://news.ycombinator.com/item?id=40819864 :
>> He emphasizes, "Any scientist can already use our model at the design stage of their own research to instantly narrow down the number of nanoparticle variants requiring experimental verification."
>> "Modeling Absorption Dynamics of Differently Shaped Gold Glioblastoma and Colon Cells Based on Refractive Index Distribution in Holotomographic Imaging" (2024) https://onlinelibrary.wiley.com/doi/10.1002/smll.202400778
> Holotomography: https://en.wikipedia.org/wiki/Holotomography
>>> Could part of the cancer-killing [200nm nanoparticle gold stars] be more magnetic than Gold (Au), for targeting treatment?
Magnets, nano robots, NIRS targeting,
"Whole-body magnetic resonance imaging at 0.05 Tesla" [1800W] https://www.science.org/doi/10.1126/science.adm7168 .. https://news.ycombinator.com/item?id=40335170
Article: ”A DNA Robotic Switch with Regulated Autonomous Display of Cytotoxic Ligand Nanopatterns” (2024) https://www.nature.com/articles/s41565-024-01676-4
Show HN: Kaskade – A text user interface for Kafka
Celery Flower TUI doesn't yet support Kafka in Python, though Kombu (the protocol wrapper for celery) does. It looks like Kaskade has non-kombu support for consumers and ? https://github.com/celery/celery/issues/7674#issuecomment-12... .. https://news.ycombinator.com/item?id=40842365
(Edit)
Kaskade models.py, consumer.py https://github.com/sauljabin/kaskade/blob/main/kaskade/model...
kombu/transport/confluentkafka.py: https://github.com/celery/kombu/blob/main/kombu/transport/co...
confluent-kafka-python wraps librdkafka with binary wheels: https://github.com/confluentinc/confluent-kafka-python
librdkafka: https://github.com/edenhill/librdkafka
Researchers demonstrate how to build 'time-traveling' quantum sensors
"Agnostic Phase Estimation" (2024) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13... :
> Abstract: The goal of quantum metrology is to improve measurements’ sensitivities by harnessing quantum resources. Metrologists often aim to maximize the quantum Fisher information, which bounds the measurement setup’s sensitivity. In studies of fundamental limits on metrology, a paradigmatic setup features a qubit (spin-half system) subject to an unknown rotation. One obtains the maximal quantum Fisher information about the rotation if the spin begins in a state that maximizes the variance of the rotation-inducing operator. If the rotation axis is unknown, however, no optimal single-qubit sensor can be prepared. Inspired by simulations of closed timelike curves, we circumvent this limitation. We obtain the maximum quantum Fisher information about a rotation angle, regardless of the unknown rotation axis. To achieve this result, we initially entangle the probe qubit with an ancilla qubit. Then, we measure the pair in an entangled basis, obtaining more information about the rotation angle than any single-qubit sensor can achieve. We demonstrate this metrological advantage using a two-qubit superconducting quantum processor. Our measurement approach achieves a quantum advantage, outperforming every entanglement-free strategy.
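The sensitivity bound the abstract refers to is the quantum Cramér-Rao bound: the quantum Fisher information $F_Q$ caps the precision of estimating the rotation angle $\theta$ over $\nu$ repetitions of the experiment:

```latex
\Delta\theta \;\ge\; \frac{1}{\sqrt{\nu\, F_Q}}
```

Maximizing $F_Q$, as the entangled probe-ancilla measurement does even when the rotation axis is unknown, is what tightens this bound.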
- "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 .. https://news.ycombinator.com/item?id=40396171
- "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... .. https://news.ycombinator.com/item?id=40492160 re photonic phase
Free-threaded CPython is ready to experiment with
Will there be an effort to encourage devs to add support for free-threaded Python like for Python 3 [1] and for Wheels [2]?
Is there a cibuildwheel / CI check for free-threaded Python support?
Is there already a reason not to have Platform compatibility tags for free-threaded cpython support? https://packaging.python.org/en/latest/specifications/platfo...
Is there a hame - a hashtaggable name - for this feature to help devs find resources to help add support?
Can an LLM mostly port code to support free-threading in Python, and in what ways should we expect the tests to be insufficient?
"Porting Extension Modules to Support Free-Threading" https://py-free-threading.github.io/porting/
[1] "Python 3 "Wall of Shame" Becomes "Wall of Superpowers" Today" https://news.ycombinator.com/item?id=4907755
(Edit)
Compatibility status tracking: https://py-free-threading.github.io/tracking/
(2021) https://news.ycombinator.com/item?id=29005573#29009072 :
python-feedstock / recipe / meta.yml: https://github.com/conda-forge/python-feedstock/blob/master/...
pypy-meta-feedstock can be installed in the same env as python-feedstock; https://github.com/conda-forge/pypy-meta-feedstock/blob/main...
Install commands from https://py-free-threading.github.io/installing_cpython/ :
sudo dnf install python3.13-freethreading
sudo add-apt-repository ppa:deadsnakes
sudo apt-get update
sudo apt-get install python3.13-nogil
conda create -n nogil -c defaults -c ad-testing/label/py313_nogil python=3.13
mamba create -n nogil -c defaults -c ad-testing/label/py313_nogil python=3.13
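Once installed, a quick runtime check for whether you are actually on a free-threaded build (Py_GIL_DISABLED is the build-config flag for --disable-gil CPython 3.13 builds; sys._is_gil_enabled() is a private 3.13+ API, so it is guarded here):

```python
import sys
import sysconfig

# True on a free-threaded (no-GIL) CPython build; None/0 -> False
# on a standard build or on versions before 3.13.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# Even a free-threaded build can re-enable the GIL at runtime
# (e.g. when an incompatible extension module is imported).
if hasattr(sys, "_is_gil_enabled"):
    gil_running = sys._is_gil_enabled()
else:
    gil_running = True
```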
TODO: conda-forge ?, pixi
Student uses black soldier flies to grow pea plants in simulated Martian soil
Hydroponics shows that it's possible to grow most plants in water, no soil at all. So soil is by no means a requirement for plants to grow.
Source: Several Kratky method veggies growing on my balcony.
But don’t you also need to bring fertilizer with you? Also, from what I remember you can’t really grow plants with woody stalks, or at least not as easily as ones with all-green stalks.
You would need some, but a lot less than you might think. For my plants, about 3-5 ml per litre of water is all that's needed.
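Scaling that dosing rate to a reservoir is simple arithmetic (the 10 L reservoir size is an assumed example, not from the comment):

```python
# 3-5 ml of nutrient concentrate per litre of water, scaled up.
reservoir_l = 10
low_ml = 3 * reservoir_l    # lower bound of the dose
high_ml = 5 * reservoir_l   # upper bound of the dose
```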
Is there a KNF/JADAM hydro / aqua solution that would work?
NPK: Nitrogen, Phosphorous, Potassium
From https://www.space.com/16903-mars-atmosphere-climate-weather.... :
> According to ESA, Mars' atmosphere is composed of 95.32% carbon dioxide, 2.7% nitrogen, 1.6% argon and 0.13% oxygen. The atmospheric pressure at the surface is 6.35 mbar which is over 100 times less Earth's. Humans therefore cannot breathe Martian air.
Lichen might grow in Martian soil and atmosphere.
"‘Fixed’ nitrogen found in martian soil" (2015) https://www.science.org/doi/10.1126/science.347.6229.1403-a :
> Now, in a study published this week in the Proceedings of the National Academy of Sciences, the NASA Curiosity rover team reports detecting nitrates on Mars. ; nitric oxides
/? do plants consume protein? https://www.google.com/search?q=do+plants+consume+protein
Plants can use protein as a source of nitrogen.
"Plants can use protein as a nitrogen source without assistance from other organisms" (2008) https://www.pnas.org/doi/abs/10.1073/pnas.0712078105
"Solein® transforms ancient microbes into the future of food" https://www.solein.com/blog/solein-transforms-ancient-microb... :
> In Solein's case, the bacteria oxidise hydrogen – a process involving the removal of electrons from the hydrogen molecules. This reaction releases energy, which the bacteria use to fix carbon dioxide, transforming it into organic compounds, including proteins.
"Scientists Just Accidentally Discovered a Process That Turns CO2 Directly Into Ethanol" (2024) https://www.sciencealert.com/scientists-just-accidentally-di... :
> [...] Rondinone [@ORNL] and his colleagues had put together a catalyst using carbon, copper, and nitrogen, by embedding copper nanoparticles into nitrogen-laced carbon spikes measuring just 50-80 nanometres tall. (1 nanometre = one-millionth of a millimetre.)
> When they applied an electric current of just 1.2 volts, the catalyst converted a solution of CO2 dissolved in water into ethanol, with a yield of 63 percent.
Algae produce amino acids and proteins. Algae also use CO2 and Hydrogen to produce Omega-3 PUFAs, which are precursors to endocannabinoids.
FWIU Hydrogen Peroxide can flush aquarium tanks and Kratky systems. From watching YouTube, IDK about pool noodle polyethylene foam in the sun (solar radiation) instead of rockwool though
Narwhals and scikit-Lego came together to achieve dataframe-agnosticism
Narwhals: https://narwhals-dev.github.io/narwhals/ :
> Extremely lightweight compatibility layer between [pandas, Polars, cuDF, Modin]
Lancedb/lance works with [Pandas, DuckDB, Polars, Pyarrow,]; https://github.com/lancedb/lance
It may already be supported through pandas?
PHET Simulations: Balancing Act: LHS, Relation Operator, RHS
This PHET simulation says it only teaches ratios ("6.RP.A.3",), but I think it also teaches algebra!
The balance beam is not balanced when the equality relation does not hold: there is then an inequality relation between the masses given the position of the fulcrum.
Adding the same amount of mass to each side does not change whether the balance remains balanced (and subtraction is the inverse of addition, and integer multiplication is repeated addition).
Given a different amount of mass on each side of the balance, what is the mathematical analogue of balancing the equation by just moving the fulcrum?
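One way to make the fulcrum question concrete: with masses m1 and m2 at the ends of a beam of length L, torques balance when m1*d1 == m2*d2 with d1 + d2 == L, so the fulcrum position can be solved directly (the numbers below are arbitrary examples):

```python
# Torque balance: m1*d1 == m2*d2, with d1 + d2 == L.
# Solving gives d1 = L * m2 / (m1 + m2): the fulcrum sits
# closer to the heavier mass, analogous to weighting each
# side of an equation by the inverse ratio of the masses.
m1, m2, L = 6.0, 4.0, 10.0
d1 = L * m2 / (m1 + m2)
d2 = L - d1
print(d1, d2)  # 4.0 6.0
assert abs(m1 * d1 - m2 * d2) < 1e-9  # torques are equal
```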
Pyxel: A retro game engine for Python
These retro game engines are so much fun. Takes me back to the days of mode 13h.
Pyxel is (I think) unique among Python game engines in that it can run on the web.
Some others I’ve played with are PyGame and Arcade, mostly geared toward 2D, but you can see some impressive 3D examples on the youtube channel DaFluffyPotato.
Ursina is another that’s more 3D, fairly expressive, and runs fairly well for being Python.
I do feel like I’m going to be forced to cross over into something more powerful to build a real game though. Either Godot or Unity.
Panda3D [0] (which is what Ursina uses under the hood) and Pygame can both run on the web due to PygBag [1].
Truth be told you can build a game on any tool; obviously the tool you choose will help shape the game you make, but it's more about keeping at it than the underlying technology.
Personally I really like Panda3D and feel like it doesn't get enough attention. Its scene graph [3] is interesting because it splits the graph into nodes and nodepaths. A node is what gets stored in the graph, but you manipulate them using the nodepaths, which simplifies programming.
It also has a really amazing async task manager [2] which makes game programming no problem. You can just pause in a task (or even an event), which sounds simple, but you'd be surprised by how many engines won't let you do that.
It also has a multiplayer solution [6] that was battle-tested for two MMOs.
Finally, I really like its interval [4] system, which is like Unreal or Unity's timeline but code-based.
It's also on PyPI [5] so it's super easy to install.
[1] https://pypi.org/project/pygbag/
[2] https://docs.panda3d.org/1.10/python/programming/tasks-and-e...
[3] https://docs.panda3d.org/1.10/python/more-resources/cheat-sh...
[4] https://docs.panda3d.org/1.10/python/programming/intervals/i...
[5] https://pypi.org/project/Panda3D/
[6] https://docs.panda3d.org/1.10/python/programming/networking/...
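Panda3D's task manager has its own API; purely to illustrate the "pause inside a task" idea above, here is a stdlib asyncio sketch of the same pattern (the names here are made up, not Panda3D's):

```python
import asyncio

async def blink(states: list) -> None:
    # A task that pauses mid-body, like awaiting inside a
    # Panda3D task: each await hands control back to the
    # "task manager" (here, the asyncio event loop).
    for _ in range(3):
        states.append("on")
        await asyncio.sleep(0)  # yield control, resume next tick
        states.append("off")

states = []
asyncio.run(blink(states))
print(states)  # alternates on/off three times
```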
harfang-wasm is a fork of pygbag.
harfang-wasm: https://github.com/harfang3d/harfang-wasm
pygbag: https://github.com/pygame-web/pygbag
https://news.ycombinator.com/item?id=38772400 :
> FWIU e.g. panda3d does not have a react or rxpy-like API, but probably does have a component tree model?
Is there a react-like api over panda3d, or are there only traditional events?
The redux DevTools extension also works with various non-react+redux JS frameworks.
Manim has a useful API for teaching. Is there a good way to do panda3d with a manim-like interface, for scripting instructional design? https://github.com/ManimCommunity/manim/issues/3362#issuecom...
I haven't used React so I'm not sure but you could check their discourse. Someone may have created one.
I believe using intervals and async tasks/events you could do something similar to Manim.
Tunable quantum emitters on large-scale foundry silicon photonics
"Tunable quantum emitters on large-scale foundry silicon photonics" (2024) https://www.nature.com/articles/s41467-024-50208-0 :
> Abstract: Controlling large-scale many-body quantum systems at the level of single photons and single atomic systems is a central goal in quantum information science and technology. Intensive research and development has propelled foundry-based silicon-on-insulator photonic integrated circuits to a leading platform for large-scale optical control with individual mode programmability. However, integrating atomic quantum systems with single-emitter tunability remains an open challenge. Here, we overcome this barrier through the hybrid integration of multiple InAs/InP microchiplets containing high-brightness infrared semiconductor quantum dot single photon emitters into advanced silicon-on-insulator photonic integrated circuits fabricated in a 300 mm foundry process. With this platform, we achieve single-photon emission via resonance fluorescence and scalable emission wavelength tunability. The combined control of photonic and quantum systems opens the door to programmable quantum information processors manufactured in leading semiconductor foundries.
Show HN: I built GithubReader.org for better learning experience
Hey Guys, I created GithubReader.org (open-source, non-profit, most of the code is written by ChatGPT).
My motivation:
I wanted an easy way to keep a list of repositories I want to learn from and access them quickly. So, I built a search page where I can find repositories by name, like Google.
Also, I wanted a nicer way to read Markdown files on my laptop without the GitHub UI, like reading a book. So, I made a custom UI with a back button that takes you to the previous state without jumping pages.
The markdown links are sharable, e.g. https://githubreader.org/render?url=https%3A%2F%2Fgithub.com...
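A sketch of how such a percent-encoded share link could be built; the target README path below is a made-up example, not taken from the site:

```python
from urllib.parse import quote

# Build a render link by percent-encoding the full target URL
# (safe="" so even ':' and '/' are escaped, matching the example above).
base = "https://githubreader.org/render?url="
target = "https://github.com/agdaily/github-reader/blob/main/README.md"  # hypothetical
share = base + quote(target, safe="")
print(share)
```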
Repository URL: https://github.com/agdaily/github-reader/
The autocomplete search is nice. I just found a repo (for linked data) I wouldn't have otherwise.
ENH: search github labels with query patterns
glad you liked it. thanks for the suggestion, added it here https://github.com/agdaily/github-reader/issues/3
Show HN: A modern Jupyter client for macOS
I love Jupyter – it's how I learned to code back when I was working as a scientist. But I was always frustrated that there wasn't a simple and elegant app that I could use with my Mac. I made do by wrapping JupyterLab in a chrome app, and then more recently switching to VS Code to make use of Copilot. I've always craved a more focused and lighter-weight experience when working in a notebook. That's why I created Satyrn.
It starts up really fast (faster time-to-execution than VS Code or JupyterLab), you can launch notebooks right from the Finder, and the design is super minimalist. It's got an OpenAI integration (use your own API key) for multi-cell generation with your notebook as context (I'll add other LLMs soon). And many more useful features like a virtual environment management UI, Black code formatting, and easy image/table copy buttons.
Full disclosure: it's built with Electron. I originally wrote it in Swift but couldn't get the editor experience to where I wanted it. Now it supports autocomplete, multi-cursor editing, and moving the cursor between cells just like you'd expect from JupyterLab or VS Code.
Satyrn sits on top of the jupyter-server, so it works with all your existing python kernels, Jupyter configuration, and ipynb files. It only works with local files at the moment, but I'm planning to extend it to support remote servers as well.
I'm an indie developer, and I will try to monetize at some point, but it's free while in alpha. If you're interested, please try it out!
I'd love your feedback in the comments, or you can contact me at jack-at-satyrn-dot-app.
Cool!
Surprised to hear you started with a native UI and pivoted to electron. What was the major blocker there?
I recently got frustrated with OpenSCAD and decided to try CadQuery and Build123d. The modeling backend is a big step forward, but the GUI is not nearly as good as OpenSCAD. I managed to get it working via VSCode with a plugin, but I’m dreaming of embedding everything in a dedicated MacOS app so I can jump into CAD work without hacking through dev setup.
There aren't many great production-ready open-source frameworks for code-editor components in Swift. I assessed quite a few but found that the feature completeness was far from what I needed. I tried to fork [CodeEditSourceEditor](https://github.com/CodeEditApp/CodeEditSourceEditor) and add the extra features I wanted, but I think it would have taken me 6-12 months to get it to an acceptable state, meanwhile not spending any time focusing on the rest of the product experience.
I decided to play around with TypeScript and Electron over a weekend and ended up with a really solid prototype, so I made the heart-wrenching decision to move over.
I'm messing around with writing my own text editor component in Swift now, but it's quite a big endeavour to get the standard expected for a production ready product.
I'm assuming a pure-swift CAD UI would be equally difficult. Would be really cool to see that tho.
FWIW I've had good experiences with Tauri so far. It's great being able to call back easily into Rust code and I haven't had any real issues with the wrapped web view.
At this point I'd start with Tauri first and only switch to Electron if I really need something only Chrome supports or if I need to support Linux. I guess the webview story there is not so good.
Tauri supports linux.
tauri-apps/tauri: https://github.com/tauri-apps/tauri :
> The user interface in Tauri apps currently leverages tao as a window handling library on macOS, Windows, Linux, Android and iOS. To render your application, Tauri uses WRY, a library which provides a unified interface to the system webview, leveraging WKWebView on macOS & iOS, WebView2 on Windows, WebKitGTK on Linux and Android System WebView on Android. ... Tauri GitHub action: https://tauri.app/v1/guides/building/cross-platform/#tauri-g...
WebView: https://en.wikipedia.org/wiki/WebView
CEF: Chromium Embedded Framework > External Projects: https://github.com/chromiumembedded/cef#external-projects : cefglue, cefpython,: https://github.com/cztomczak/cefpython
(edit)
tauri-apps/wry > Bundle Chromium Renderer; Chromium WebView like WebView2 (Edge (Chromium)) https://github.com/tauri-apps/wry/issues/1064#issuecomment-2...
Gravity alters the dynamics of a phase transition
> An experiment uncovers the role played by gravity in Ostwald ripening, a spontaneous thermodynamic process responsible for many effects such as the recrystallization of ice cream.
Ostwald ripening: https://en.wikipedia.org/wiki/Ostwald_ripening
And yet another parameter to phase transition diagrams:
Mpemba effect: https://en.wikipedia.org/wiki/Mpemba_effect :
> The Mpemba effect is the name given to the observation that a liquid (typically water) which is initially hot can freeze faster than the same liquid which begins cold, under otherwise similar conditions.
From "Exploring Quantum Mpemba Effects" (2024) https://physics.aps.org/articles/v17/105 :
> Under certain conditions, warm water can freeze faster than cold water. This phenomenon was named the Mpemba effect after Erasto Mpemba, a Tanzanian high schooler who described the effect in the 1960s [1]. The phenomenon has sparked intense debates for more than two millennia and continues to do so [2]. Similar processes, in which a system relaxes to equilibrium more quickly if it is initially further away from equilibrium, are being intensely explored in the microscopic world. Now three research teams provide distinct perspectives on quantum versions of Mpemba-like effects, emphasizing the impact of strong interparticle correlations, minuscule quantum fluctuations, and initial conditions on these relaxation processes [3–5]. The teams’ findings advance quantum thermodynamics and have potential implications for technologies, ranging from information processors to engines, powered by quantum resources.
Revealing the complex phases of rhombohedral trilayer graphene
> Rhombohedral graphene is an emerging material with a rich correlated-electron phenomenology, including superconductivity. The magnetism of symmetry-broken trilayer graphene has now been explored.
"Revealing the complex phases of rhombohedral trilayer graphene" (2024) https://www.nature.com/articles/s41567-024-02561-6
"Physicists create five-lane superhighway for electrons" https://news.ycombinator.com/item?id=40417929 :
> [ rhombohedral ]
> "Correlated insulator and Chern insulators in pentalayer rhombohedral-stacked graphene" (2023) https://www.nature.com/articles/s41565-023-01520-1 :
> [...] Our results establish rhombohedral multilayer graphene as a suitable system for exploring intertwined electron correlation and topology phenomena in natural graphitic materials without the need for moiré superlattice engineering.
And there's semiconductivity in graphene, too:
- "Researchers create first functional graphene semiconductor" (2024) https://news.ycombinator.com/item?id=38869171
- "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" https://news.ycombinator.com/item?id=40719725
Electric Vehicle Batteries Surprising New Source of 'Forever Chemical' Pollution
This is more fear mongering, a giant nothing burger. There will be huge advantages to recycling these batteries as the technologies come out, and dealing with the PFAS will be on the recycler.
> There will be
> will be on the recycler
Given the current state of recycling in general, I doubt this is a nothing burger.
Lead-acid batteries have very good recycling rates.
Yes, that is because in many places it is a law, plus recycling them is a money making industry.
I predict that it will be the same for lithium in the near future.
Based on what? Where are the economics to back that feeling up?
The first pilot plants are being built, e.g. https://electrek.co/2023/03/02/tesla-cofounders-redwood-show...
Redwood Materials: https://en.wikipedia.org/wiki/Redwood_Materials :
> In March 2023 Redwood claimed to have recovered more than 95% of important metals (incl. lithium, cobalt, nickel and copper) from 500,000 lb (230,000 kg) of old NiMH and Li-Ion packs. [10]
That's without $10K humanoid robots sorting through electronics and removing batteries IIUC.
Recovering rare earths from electronics waste should be even more profitable now. Just look at Copenhill; how many inputs and how many outputs?
"Turning waste into gold" (2024) https://www.sciencedaily.com/releases/2024/02/240229124612.h... :
> Researchers have recovered gold from electronic waste. Their highly sustainable new method is based on a protein fibril sponge, which the scientists derive from whey, a food industry byproduct. ETH Zurich researchers have recovered the precious metal from electronic waste.
"[Star-shaped] Gold nanoparticles kill cancer – but not as thought" (2024) https://news.ycombinator.com/item?id=40819864
Butter made from CO2 could pave the way for food without farming
I can't find any info.
Last time I read a similar "groundbreaking" result, they were using bacteria instead of cows.
With cows, you have CO2 + sunlight -> grass -> butter.
With bacteria it's a similar process, but you skip the grass (if they are photosynthetic), or you just replace the grass with another intermediate step.
Violation of Bell inequality by photon scattering on a two-level emitter
"Violation of Bell inequality by photon scattering on a two-level emitter" (2024) https://www.nature.com/articles/s41567-024-02543-8
- "Quantum dot photon emitters violate Bell inequality in new study" https://phys.org/news/2024-07-quantum-dot-photon-emitters-vi...
Bell's Theorem: https://en.wikipedia.org/wiki/Bell%27s_theorem
From "Scientists show that there is indeed an 'entropy' of quantum entanglement" (2024) https://news.ycombinator.com/item?id=40396001#40396211 :
> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
"Reversibility of quantum resources through probabilistic protocols" (2024) https://www.nature.com/articles/s41467-024-47243-2
"Thermodynamics of Computations with Absolute Irreversibility, Unidirectional Transitions, and Stochastic Computation Times" (2024) https://journals.aps.org/prx/abstract/10.1103/PhysRevX.14.02...
End-to-end congestion control cannot avoid latency spikes (2022)
- "MOOC: Reducing Internet Latency: Why and How" (2023) https://news.ycombinator.com/item?id=37285586#37285733 ; sqm-autorate, netperf, iperf, flent, dslreports_8dn
Bufferbloat > Solutions and mitigations: https://en.wikipedia.org/wiki/Bufferbloat#Solutions_and_miti...
How I've Learned to Live with a Nonexistent Working Memory
Can't read without an account.
Also "working memory" is short-term storage, like registers, the details of what you're thinking about right now. Not memory of past events, like your wedding day.
Yeah I have no idea what definition the OP had in mind. Nonexistent working memory must be devastating:
> Working memory is a cognitive system with a limited capacity that can hold information temporarily.
> It is important for reasoning and the guidance of decision-making and behavior.
https://en.m.wikipedia.org/wiki/Working_memory
Memory improvement > Strategies: https://en.wikipedia.org/wiki/Memory_improvement#Strategies
Hippocampal prosthesis: https://en.wikipedia.org/wiki/Hippocampal_prosthesis
Cortical implant: https://en.wikipedia.org/wiki/Cortical_implant
Digital immortality: https://en.wikipedia.org/wiki/Digital_immortality
External memory: https://en.wikipedia.org/wiki/External_memory_(psychology)
Interesting. I’m bipolar and I can absolutely say core training does not work for me. My working memory has a hard limit on the number of tokens it can store.
What does work is encoding complex ideas into longer term memory so they become one token instead of many. I typically shove them into visual memory.
The best way I can explain it is a terrain map. The map has trees which are flow charts, lakes which are distillations of many ideas into a small pools of concepts. Valleys of questions. Eventually everything is in my “mind’s eye” and I can figuratively fly over the data.
Now take that image and remove the visuals. My brain “sees” the map without needing to describe it. It exists as pure, untranslated thought. Intermediate steps, like language, would only get in the way.
Linguistic relativity; does language construct cognition or vice-versa? https://en.wikipedia.org/wiki/Linguistic_relativity
"List of concept- and mind-mapping software" https://en.wikipedia.org/wiki/List_of_concept-_and_mind-mapp...
- markmap: markdown + mindmap https://markmap.js.org/ , markmap-vs-code
- vim-voom: https://vim-voom.github.io/
- org-mode, nvim-org-mode
- vscode markdown outline: https://code.visualstudio.com/docs/languages/markdown#_docum...
Many a time back in the day I wondered why Word's outline editor mode imposed stylesheet styles on node levels at all.
Not a fan of the Sapir–Whorf hypothesis. For myself my brain clearly functions at a lower level of abstraction. Language is an imperfect translation of my actual ideas.
If the hypothesis has any truth at all, it cannot be universal. It doesn’t apply to me.
If someone has thoughts they cannot express in words, are they truly limited by the language they speak?
OneFileLinux: A 20MB Alpine metadistro that fits into the ESP
I made this exact thing like 2 years ago except with a custom busybox based environment, huh
It worked by building the system into the initramfs, then using a kernel feature to basically bake the initramfs into the kernel image, and then using efistub to make the kernel bootable
Efistub
Kernel docs > admin-guide/efistub: https://docs.kernel.org/admin-guide/efi-stub.html :
> Like most boot loaders, the EFI stub allows the user to specify multiple initrd files using the “initrd=” option. This is the only EFI stub-specific command line parameter, everything else is passed to the kernel when it boots
Arch Wiki > EFISTUB: https://wiki.archlinux.org/title/EFISTUB :
> The option is enabled by default on Arch Linux kernels, or if compiling the kernel one can activate it by setting CONFIG_EFI_STUB=y in the Kernel configuration.
eaut/efistub is a script for managing signed efistubs: https://github.com/eaut/efistub
... Systemrescuecd can be PXE booted (netbooted) from tftpd managed by Cobbler or Foreman.
From "Booting Linux off of Google Drive" https://news.ycombinator.com/item?id=40853770#40859485 re: signing and iPXE
Ventoy boots via grub/efi but there are blobs: https://news.ycombinator.com/item?id=40619822#40620279
"Ask HN: Where can I find a primer on how computers boot?" https://news.ycombinator.com/item?id=35230852
systemd-boot and grub support booting to efistubs
"Migrate from systemd-boot to EFISTUB" https://www.reddit.com/r/archlinux/comments/14jt8yv/migrate_... :
> You don't "disable" systemd-boot, you just don't run it. Or you can delete the boot entry for it with efibootmgr if you want. You also don't need to "set up" EFISTUB, the stub is already built into the Arch kernels, so they're already bootable. So all you need to do is make a boot entry in your UEFI using efibootmgr with the correct kernel parameters, most likely the same ones you used for systemd-boot.
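The efibootmgr step described above can be sketched as follows; the disk, partition, kernel path, and root UUID are placeholders to substitute with your own (and efibootmgr needs root):

```shell
# Sketch only: create a UEFI boot entry for an EFISTUB kernel.
# All device paths and the UUID below are placeholder examples.
efibootmgr --create \
  --disk /dev/sda --part 1 \
  --label "Linux EFISTUB" \
  --loader '\vmlinuz-linux' \
  --unicode 'root=UUID=<your-root-uuid> rw initrd=\initramfs-linux.img'
```

Verify the new entry afterwards with a plain `efibootmgr` listing before rebooting.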
Fedora > Changes/Unified_Kernel_Support_Phase_2 > Switch_an_existing_install_to_use_UKIs: https://fedoraproject.org/wiki/Changes/Unified_Kernel_Suppor... :
dnf install virt-firmware uki-direct kernel-uki-virt
kernel-bootcfg --show
https://github.com/spxak1/weywot/blob/main/guides/fedora_sys... : sudo dracut -fvM --uefi --hostonly-cmdline --kernel-cmdline "root=UUID=b6b8fa59-92cc-4d03-8d8f-d66dab76d433 ro rootflags=subvol=root resume=UUID=fb661671-97dc-45db-b720-062acdcf095e rhgb quiet mitigations=off"
- "No more boot loader: Please use the kernel instead" https://news.ycombinator.com/item?id=40907933
Gentoo wiki > EFI_stub: https://wiki.gentoo.org/wiki/EFI_stub :
> It is recommended to always have a backup kernel. If a bootmanager like grub is already installed, it should not be uninstalled, because grub can boot a stub kernel just like a normal kernel. A second possibility is to work with an additional UEFI entry. Before installing a new kernel, the current one can be copied from /efi/EFI/example/ to /efi/EFI/backup. [And then efibootmgr]
Circularly Polarized Light Unlocks Ultrafast Magnetic Storage and Spintronics
"Ultrafast opto-magnetic effects in the extreme ultraviolet spectral range" (2024) https://www.nature.com/articles/s42005-024-01686-7 :
> Coherent light-matter interactions mediated by opto-magnetic phenomena like the inverse Faraday effect (IFE) are expected to provide a non-thermal pathway for ultrafast manipulation of magnetism on timescales as short as the excitation pulse itself. As the IFE scales with the spin-orbit coupling strength of the involved electronic states, photo-exciting the strongly spin-orbit coupled core-level electrons in magnetic materials appears as an appealing method to transiently generate large opto-magnetic moments. Here, we investigate this scenario in a ferrimagnetic GdFeCo alloy by using intense and circularly polarized pulses of extreme ultraviolet radiation. Our results reveal ultrafast and strong helicity-dependent magnetic effects which are in line with the characteristic fingerprints of an IFE, corroborated by ab initio opto-magnetic IFE theory and atomistic spin dynamics simulations.
New shapes of photons open doors to advanced optical technologies
"Symmetries and wave functions of photons confined in three-dimensional photonic band gap superlattices" (2024) https://journals.aps.org/prb/abstract/10.1103/PhysRevB.109.2...
PostgreSQL and UUID as Primary Key
The best advice I can give you is to use bigserial for B-tree friendly primary keys and consider a string-encoded UUID as one of your external record locator options. Consider other simple options like PNR-style (airline booking) locators first, especially if nontechnical users will quote them. It may even be OK if they’re reused every few years. Do not mix PK types within the schema for a service or application, especially a line-of-business application.
Use UUIDv7 only as an identifier for data that is inherently timecoded, otherwise it leaks information (even if timeshifted). Do not use hashids: they have no cryptographic qualities and are less friendly to everyday humans than the integers they represent; you may as well just use the sequence ID.
As for the encoding, do not use base64 or other hyphenated alphabets, nor any identifier scheme that can produce a leading ‘0’ (zero) or ‘+’ (plus) when encoded (for the day your stuff is pasted via Excel).
Generally, the principles of separation of concerns and mechanical sympathy should be top of mind when designing a lasting and purposeful database schema.
Finally, since folks often say “I like stripe’s typed random IDs” in these kind of threads: Stripe are lying when they say their IDs are random. They have some random parts but when analyzed in sets, a large chunk of the binary layout is clearly metadata, including embedded timestamps, shard and reference keys, and versioning, in varying combinations depending on the service. I estimate they typically have 48-64 bits of randomness. That’s still plenty for most systems; you can do the same. Personally I am very fond of base58-encoded AES-encrypted bigserial+HMAC locators with a leading type prefix and a trailing metadata digit, and you can in a pinch even do this inside the database with plv8.
For postgres, you want to use “bigint generated always as identity” instead of bigserial.
Why “bigint generated always as identity” instead of bigserial, instead of Postgres' uuid data type?
Postgres' UUID datatype: https://www.postgresql.org/docs/current/datatype-uuid.html#D...
django.db.models.fields.UUIDField: https://docs.djangoproject.com/en/5.0/ref/models/fields/#uui... :
> class UUIDField: A field for storing universally unique identifiers. Uses Python’s UUID class. When used on PostgreSQL and MariaDB 10.7+, this stores in a uuid datatype, otherwise in a char(32)
> [...] Lookups on PostgreSQL and MariaDB 10.7+: Using iexact, contains, icontains, startswith, istartswith, endswith, or iendswith lookups on PostgreSQL don’t work for values without hyphens, because PostgreSQL and MariaDB 10.7+ store them in a hyphenated uuid datatype.
From the sqlalachemy.types.Uuid docs: https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqla... :
> Represent a database agnostic UUID datatype.
> For backends that have no “native” UUID datatype, the value will make use of CHAR(32) and store the UUID as a 32-character alphanumeric hex string.
> For backends which are known to support UUID directly or a similar uuid-storing datatype such as SQL Server’s UNIQUEIDENTIFIER, a “native” mode enabled by default allows these types will be used on those backends.
> In its default mode of use, the Uuid datatype expects Python uuid objects, from the Python uuid module
From the docs for the uuid Python module: https://docs.python.org/3/library/uuid.html :
> class uuid.SafeUUID: Added in version 3.7.
> safe: The UUID was generated by the platform in a multiprocessing-safe way
And there's not yet a uuid.uuid7() in the uuid Python module.
UUIDv7 leaks timing information ( https://news.ycombinator.com/item?id=40886496 ); which is ironic because uuids are usually used to avoid the "guess an autoincrement integer key" issue
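The leak is easy to demonstrate: the first 48 bits of a UUIDv7 are a Unix-epoch millisecond timestamp. A sketch using the example v7 value from RFC 9562 (Python's uuid module has no uuid7() yet, as noted above, but it parses the value fine):

```python
import time
import uuid

# RFC 9562's example UUIDv7; its first 6 bytes encode a Unix-epoch
# millisecond timestamp, so anyone holding the ID can recover
# roughly when the row was created.
u = uuid.UUID("017F22E2-79B0-7CC3-98C4-DC0C0C07398F")
ms = int.from_bytes(u.bytes[:6], "big")
print(time.strftime("%Y-%m-%d", time.gmtime(ms / 1000)))  # 2022-02-22
```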
Just noting, the commenter you replied to said:
> use “bigint generated always as identity” instead of bigserial.
The commenter you are replying to was not saying anything about whether to use UUIDs or not; they just said "if you are going to use bigserial, you should use bigint generated always as identity instead".
The question is the same; why would you use bigint instead of the native UUID type?
Why does OT compare text and UUID instead of char(32) and UUID?
What advantage would there be for database abstraction libraries like SQLalchemy and Django to implement the UUID type with bigint or bigserial instead of the native pg UUID type?
Best practice in Postgres is to always use the text data type and combine it with check constraints when you need an exact length or max length.
See: https://wiki.postgresql.org/wiki/Don't_Do_This#Text_storage
Also, I think you're misunderstanding the article. They aren't talking about storing a UUID in a bigint. They're talking about having two different IDs. An incrementing bigint is used internally within the db for PKs and FKs. A separate UUID is used as an external identifier that's exposed by your API.
What needs to be stored as text if there is a native uuid type?
Chapter 8. Data Types > Table 8.2. Numeric Types: https://www.postgresql.org/docs/current/datatype-numeric.htm... :
> bigint: -9223372036854775808 to +9223372036854775807
> bigserial: 1 to 9223372036854775807
2**63 - 1 == 9223372036854775807
Todo UUID /? postgres bigint UUID: https://www.google.com/search?q=postgres+bigint+uuid :
- UUIDs are 128 bits, and they're unsigned, so: 2**128 possible values
- "UUID vs Bigint Battle!!! | Scaling Postgres 302" https://www.scalingpostgres.com/episodes/302-uuid-vs-bigint-...
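A quick check of the widths discussed above:

```python
import uuid

# bigint / bigserial ceiling: signed 64-bit integer
assert 2**63 - 1 == 9223372036854775807
# a UUID occupies 128 unsigned bits, twice the width of a bigint
assert uuid.uuid4().int < 2**128
print("ok")
```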
"Reddit's photo albums broke due to Integer overflow of Signed Int32" https://news.ycombinator.com/item?id=33976355#33977924 re: IPv6 addresses having 64+64=128 bits
FWIW networkx has an in-memory Graph.relabel_nodes() method that assigns ints to unique node names in order to reduce RAM utilization for graph algorithms: https://networkx.org/documentation/stable/reference/generate...
Many people store UUIDs as text in the database. Needless to say, this is bad. TFA starts by proposing that it's bad, then does some tests to show why.
I'm not quite sure what all the links have to do with the topic at hand.
Which link are you concerned about the topicality of, in specific?
Shouldn't we then link to the docs on how many bits wide the db datatypes are, whether a datatype is prefix- or suffix-searchable, whether there's data leakage from UUID namespacing with the primary NIC MAC address and UUIDv7, and whether there will be overflow with a datatype less wasteful than text for UUIDs, when there is already a UUID datatype that one could argue to improve if there is a potential performance benefit?
Teaching general problem-solving skills is not a substitute for teaching math [pdf] (2010)
This comes down to the old saying "everything is memorization at the end of the day".
Some people literally memorize answers. Other folks memorize algorithms. Yet other folks memorize general collections of axioms/proofs and key ideas. And perhaps at the very top of this hierarchy is memorizing just generic problem solving strategies/learning strategies.
And while naively we might believe that "understanding is everything", it really isn't. Consider if you are in the middle of a calculus exam and need to evaluate $7 \times 8$ by calculating $7+7+7+7...$ and then proceed to count on your fingers up to 56 because even $7+7$ wasn't memorized. You're almost certainly not going to make it past the first problem on your exam even though you really do understand exactly what's going on.
Similar things are true for software engineering. If you have to stackoverflow every single line of code that you are attempting to write all the way down to each individual print statement and array access, it doesn't fucking matter HOW well you understand what's going on or how clear your mental models are. You are simply not going to be a productive/useful person on a team.
At some point in order to be effective in any field you need to eventually just KNOW the field, meaning have memorized shortcuts and paths so that you only spend time working on the "real problem".
To really drive the point home: this is the difference between being "intelligent" and being "experienced".
> Consider if you are in the middle of a calculus exam and need to evaluate $7 \times 8$ by calculating $7+7+7+7...$ and then proceed to count on your fingers up to 56 because even $7+7$ wasn't memorized. You're almost certainly not going to make it past the first problem on your exam even though you really do understand exactly what's going on.
This is not a counterexample because exams aren't an end goal. The process of filling out exams isn't an activity that provides value to society.
If an exam poorly grades a student who would do great solving actual real-world problems, the exam is wrong. No ifs. No buts. The exam is wrong because it's failing the ultimate goal: school is supposed to increase people's value to society and help figure out where their unique abilities may be of most use.
> Similar things are true for software engineering. If you have to stackoverflow every single line of code that you are attempting to write all the way down to each individual print statement and array access, it doesn't fucking matter HOW well you understand what's going on or how clear your mental models are. You are simply not going to be a productive/useful person on a team.
If their mental models are truly so amazing, they'd make a great (systems) architect without having to personally code much.
To know something includes speed of regurgitation. Consider a trauma surgeon. You want them to know, off the top of their head, lots of stuff. You don’t want them taking their time and looking things up. You don’t want them redefining everything from first principles each time they perform surgery.
Knowing a topic includes instant recall of a key body of knowledge.
Maybe survey engineers with a first order derivative question and a PDE question n years after graduation with credential?
CAS and automated tests wins again.
A robosurgeon tech that knows to stop and read the docs and write test assertions may have more total impact.
I’m ABD in math. It was 30 years ago that I decided to not get a Ph.D. because I realized that I was never going to be decent at research. In the last 30 years I have forgotten a great deal of mathematics. It is no longer true that I know Galois Theory. I used to know it, I know the basic idea behind it, and I believe I can fairly easily relearn it. But right now I don’t know it.
That's wild; we all use AES ciphers w/ TLS/HTTPS every day, and Galois fields are essential to AES, but few people understand how HTTPS works.
The field is probably onto post-AES, PQ algos where Galois Theory is less relevant; but back then, it seemed like everyone needed to learn Galois Theory, which may or may not be prerequisite to future study.
The problem-solving skills are what's still useful.
Perhaps problem-solving skills cannot be developed without such rote exercises; and perhaps the content of such rote exercises is not relevant to predicting career success.
First anode-free sodium solid-state battery
Na4MnCr(PO4)3
Chromium is 5 times more abundant than Lithium in the Earth's crust (0.01% vs 0.002%). Better, but not by that much?
"Regular" sodium-ion batteries with Prussian blue have, it seems, the great advantage of not using any scarce elements. It would be nice to have a comparison between this solid state chemistry and the regular one.
What are the processes for recovering Chromium (Cr) when recycling batteries after a few hundred cycles?
Is it Trivalent, Hexavalent, or another form of Chromium?
Chromium > Precautions: https://en.wikipedia.org/wiki/Chromium#Precautions
Erin Brockovich: https://en.wikipedia.org/wiki/Erin_Brockovich :
> [average Cr-6 levels in wells; public health] In October 2022, even though the EPA announced Cr-6 was likely carcinogenic if consumed in drinking water, The American Chemistry Council, an industry lobby group, disputed their finding. [18]
Hopefully it's dietary chromium, not Hexavalent chromium (Cr-6).
(My comment on this is inexplicably downvoted to 0?)
Again, would battery recycling processes affect the molecular form of Chromium?
Using copper to convert CO₂ to methane
Actual title: "Using copper to convert CO₂ to methane could be game changer in mitigating climate change".
Is there demand for methane? Why are there so many methane flares at oil wells in Texas, then?
At the rate solar, wind, and batteries are coming along, carbon capture is a waste of time and resources. Price alone is going to eliminate most demand for carbon based fuels. This is happening much faster than expected. See last week's Economist.
How are they planning to produce methane for return flights from Mars? There is Copper on Mars.
"Copper nanoclusters: Selective CO2 to methane conversion beyond 1 A/cm²" (2024) https://www.sciencedirect.com/science/article/pii/S092633732...
Ask HN: Solutions for Sustainable Tires before 2050?
Almost all automotive tires are made of synthetic "rubber".
Synthetic rubber tires pollute microplastics which harm aquatic life including salmon.
So, we need to make tires out of different materials in order to spare the environment, society and public health, and economy.
What are some potential solutions for sustainable tire products and production?
"Here’s how Michelin plans to make its tires more renewable: The tire company wants a completely sustainable tire by 2050" (2024) https://arstechnica.com/cars/2024/07/heres-how-michelin-plan...
"Michelin Sustainable Tires Use a Rice Byproduct" https://www.motortrend.com/features/michelin-sustainable-ric...
OTOH, Dandelion rubber (Taraxagum), Hemp
Dandelion Rubber; Taraxagum.
- "Tire dust makes up the majority of ocean microplastics" https://news.ycombinator.com/item?id=37728005
- Taraxagum is extractable with just water FWIU
- Would GMO dandelions be easier to scale up with traditional and vertical gardening? How could dandelions be genetically-engineered for use in sustainable tires? Larger stems, fewer or no seeds to avoid clogging up vertical gardening,
- What are some uses for the non-stem waste part of dandelion flowers?
Hemp "rubber"
/? hemp rubber https://www.google.com/search?q=hemp+rubber
"New Green Polymeric Composites Based on Hemp and Natural Rubber Processed by Electron Beam Irradiation" (2014) https://onlinelibrary.wiley.com/doi/10.1155/2014/684047 51 citations : https://scholar.google.com/scholar?cites=1043509048813828783... :
> Abstract: A new polymeric composite based on natural rubber reinforced with hemp has been processed by electron beam irradiation and characterized by several methods. The mechanical characteristics: gel fraction, crosslink density, water uptake, swelling parameters, and FTIR of natural rubber/hemp fiber composites have been investigated as a function of the hemp content and absorbed dose. Physical and mechanical properties present a significant improvement as a result of adding hemp fibres in blends. Our experiments showed that the hemp fibers have a reinforcing effect on natural rubber similar to mineral fillers (chalk, carbon black, silica). The crosslinking rates of samples, measured using the Flory-Rehner equation, increase as a result of the amount of hemp in blends and the electron beam irradiation dose increasing. The swelling parameters of samples significantly depend on the amount of hemp in blends, because the latter have hydrophilic characteristics.
Is there a dandelion hemp blend that's good for tires?
Michelin has rice byproduct in their current Most Sustainable Tire.
Maybe dandelion stems, some part of the industrial hemp plant, and rice byproduct?
With processing by IDK cheaper Electron Beam Irradiation too?
"Goodyear's new prototype tires are made of soybeans — and they could majorly improve your gas mileage: Tire manufacturer Goodyear has made a pledge to create a tire made from 100% sustainable materials by 2030." https://www.thecooldown.com/green-tech/goodyear-sustainable-...
We can't even be assed to mandate repairable smartphones in 2024. People don't care about our environment, they care about affordable technology that does what they want while marketing the values they respect to them. Tire manufacturers, big tech, and most sizable businesses have figured this out by now. Even if you invented a miracle rubber alternative, it was cheaper than synthetic rubber, and you made it freely available, tire manufacturers would still undercut each other until they had the highest-margin product. Any way you slice it, our miraculous free market exclusively incentivizes manufacturers to destroy the one planet our human race has.
My advice? Give up on the environment. You're on Hacker News, the majority of this website would happily destroy the earth for decades to come if it meant they could be king for a year. Even outside HN, who consciously considers the environment when making purchasing decisions anymore? Who stands up to the lobbyists that stop pro-environmental change from happening? Nobody. We suffer because people with money insist it's the only way we can live, and biodegradable tires aren't going to change that political status-quo.
Well, good luck with that approach, its outcomes, the costs and losses including poor food quality, and the gosh darn heat.
It's not my approach, it's our approach. I don't like it either, but I also lack the determinate capacity to change it. It's easier to just acknowledge that nobody values anything besides greed and resource competition unless the monopoly on violence demands otherwise.
A: Looking to share solutions for [XYZ]
B: Don't even try, it's hopeless, just do hedonism without consideration like everyone else
A: Use market incentives to change production and consumption behavior to head off costs and loss.
It's less of a chicken-and-egg problem and more of a cart-and-horse one. You fundamentally cannot promote a new solution in a market that won't respect the values you're trying to promote. Hedonism doesn't drive synthetic rubber sales, price-sensitive customers do.
> price-sensitive customers do
So we should fight poverty.
Fixing poverty doesn't fix the free market. Customers will be price-sensitive no matter how rich they are because tires are a utility we don't benefit from splurging on. In other market segments, it's convenience or proprietary lock-in that separates them from positive alternatives.
The only way to "fix" it in America is to legislate the tire manufacturers into compliance. And when you do that people will kick and scream and say you've ruined their free market, so you've got to pick your poison and stand by it.
Anxious Generation – How Safetyism and Social Media Are Damaging the Kids
Does anxiety correlate to inflammatory ultra-processed diets?
Is there an incentive to self-report anxiety?
How does this connect to the article at all?
There is an epidemiological uptick in anxiety diagnosis rate; hence the article title: "Anxious Generation."
Is that due to broader trends in public health like lack of exercise and poor diet, specifically inflammatory foods like ultra-processed foods, exposure to food packaging, plastics, waterproofing chemicals, or other environmental contaminants?
What correlations in social research and human behavior can we identify?
Is there an increase in anxiety diagnoses in states with medical discounts for anxiety treatments?
YouTube's eraser tool removes copyrighted music without impacting other audio
I guess it's good that there's a new solution to this problem, but I wish it just wasn't a problem in the first place. It's ridiculous that someone can't just walk around outside, making a video of what they're doing, and post it, without having to worry that some copyrighted music is playing in the background for some portion of it.
That sort of situation should be considered fair use, and copyright owners who attempt to make trouble for people caught up in this situation should be slapped down, hard, and fined.
> walk around outside, making a video of what they're doing, and post it, without having to worry that some copyrighted music is playing in the background for some portion of it.
You can, this is called "incidental use" and is an exception to copyright in the USA. If you're filming yourself going down the street and someone starts playing a song, and you're not going out of your way to capture the copyrighted content, it is legal.
YouTube's copyright strike system is more strict than the law, probably to kowtow to the music companies serving the YouTube Music product in exchange for not having to defend against a lawsuit.
If we weren't subject to their monopoly on online video, someone could just start telling copyright trolls to pound sand when a frivolous complaint comes up.
In addition to incidental use and law enforcement/investigative purposes, copyrighted music in a YouTube video could also be academic use, journalism/criticism, sufficiently transformative, or a mixtape-style compilation.
Someday hopefully the musical copyright folks on YouTube will share revenue with Visual Artists on there, too.
Txtai – A Strong Alternative to ChromaDB and LangChain for Vector Search and RAG
https://news.ycombinator.com/item?id=40115099 :
> lancedb/lance is faster than pandas with dtype_backend="arrow" and has a vector index
From https://github.com/lancedb/lance :
> Modern columnar data format for ML and LLMs implemented in Rust. Convert from parquet in 2 lines of code for 100x faster random access, vector index, and data versioning. Compatible with Pandas, DuckDB, Polars, Pyarrow,
> [...] Vector Search
> Comparison of different data formats in each stage of ML development cycle: Lance, Parquet & ORC, JSON & XML, TFRecord, Database, Warehouse
What can ChromaDB do that lancedb can't, and will either work in WASM?
From https://github.com/chroma-core/chroma :
> By default, Chroma uses Sentence Transformers to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
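For context on what these libraries index: the core operation is nearest-neighbor search over embedding vectors. A brute-force stdlib sketch of that operation (toy 2-d "embeddings"; a real vector index exists precisely to avoid this linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy document embeddings; real ones have hundreds of dimensions.
docs = {"doc1": [1.0, 0.0], "doc2": [0.6, 0.8]}
query = [0.9, 0.1]

# Brute-force nearest neighbor: score every document against the query.
best = max(docs, key=lambda name: cosine(query, docs[name]))
```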
Hydrothermal environment discovered deep beneath the ocean
Nature article: https://www.nature.com/articles/s41598-024-60802-3
"Discovery of the first hydrothermal field along the 500-km-long Knipovich Ridge offshore Svalbard (the Jøtul field)" (2024) https://www.nature.com/articles/s41598-024-60802-3 :
> "The newly discovered hydrothermal field, named Jøtul hydrothermal field, is associated with the eastern bounding fault of the rift valley rather than with an axial volcanic ridge. Guided by physico-chemical anomalies in the water column, ROV investigations on the seafloor showed a wide variety of fluid escape sites, inactive and active mounds with abundant hydrothermal precipitates, and chemosynthetic organisms. Fluids with temperatures between 8 and 316 °C as well as precipitates were sampled at four vent sites. High methane, carbon dioxide, and ammonium concentrations, as well as high [87/86] Sr isotope ratios of the vent fluids indicate strong interaction between magma and sediments from the Svalbard continental margin. Such interactions are important for carbon mobilization at the seafloor and the carbon cycle in the ocean
Does that help confirm or reject this:
"Dehydration melting at the top of the lower mantle" (2014) https://www.science.org/doi/10.1126/science.1253358 :
> They conclude that the mantle transition zone — 410 to 660 km below Earth's surface — acts as a large reservoir of water.
I don't think it has anything to do with that. The water in these zones is seawater convecting down, then up, through the hot rock.
Mimicking the cells that carry hemoglobin as a blood substitute
Another solution; from "Scientists Find a Surprising Way to Transform A and B Blood Types Into Universal Blood" (2024) https://singularityhub.com/2024/04/29/scientists-find-a-surp... :
> Let’s Talk ABO+: When tested in clinical trials, converted blood has raised safety concerns. Even when removing A or B antigens completely from donated blood, small hints from earlier studies found an immune mismatch between the transformed donor blood and the recipient. In other words, the engineered O blood sometimes still triggered an immune response.
The sad state of property-based testing libraries
With the advent of coverage based fuzzing, and how well supported it is in Go, what am I missing from not using one of the property based testing libraries?
https://www.tedinski.com/2018/12/11/fuzzing-and-property-tes...
Like, with the below fuzz test, and the corresponding invariant checks, isn't this all but equivalent to property tests?
https://github.com/ncruces/aa/blob/505cbbf94973042cc7af4d6be...
https://github.com/ncruces/aa/blob/505cbbf94973042cc7af4d6be...
The distinction between property based testing and fuzzing is basically just a rough cluster of vibes. It describes a real difference, but the borders are pretty vague and precisely deciding which things are fuzzing and which are PBT isn’t really that critical.
- Quick running tests, detailed assertions —> PBT
- Longer tests, just looking for a crash —> fuzzing.
- In between, who knows?
https://hypothesis.works/articles/what-is-property-based-tes...
Interesting read. I guess I'm mostly interested in number 2 here:
Under this point of view, a property-based testing library is really two parts:
1. A fuzzer.
2. A library of tools for making it easy to construct property-based tests using that fuzzer.
What should I (we?) be building on top of Go's fuzzer to make it easier to construct property-based tests? My current strategy is to interpret a byte stream as a sequence of "commands", let those be transitions that drive my "state machine", then test all invariants I can think of at every step.
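That commands-from-bytes strategy can be sketched with a toy model (the stack model and the invariants here are illustrative, not from the linked repo; Python stands in for Go):

```python
def run_commands(data: bytes) -> list:
    """Interpret each byte as a command driving a toy stack 'state machine',
    checking invariants after every transition (PBT on top of a fuzzer)."""
    stack = []
    for b in data:
        if b % 2 == 0:
            stack.append(b)     # even byte: push
        elif stack:
            stack.pop()         # odd byte: pop, if possible
        # Invariants, checked at every step:
        assert all(x % 2 == 0 for x in stack)  # only even values ever pushed
        assert len(stack) <= len(data)         # bounded growth
    return stack

result = run_commands(bytes([2, 4, 1, 6]))
```

A fuzzer then supplies arbitrary byte strings; any input that trips an invariant is a minimized counterexample, much like a shrunk PBT failure.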
I don’t know much about that specific fuzzer; but all you need is a/ consistent and controlled randomness, b/ combinators, c/ reducers.
What you describe (commands) is commonly used with model based testing and distributed systems.
TLA+, Formal Methods in Python: FizzBee, Nagini, Deal-solver, Dafny: https://news.ycombinator.com/item?id=39938759 :
> Python is Turing complete, but does [TLA,] need to be? Is there an in-Python syntax that can be expanded in place by tooling for pretty diffs; How much overlap between existing runtime check DbC decorators and these modeling primitives and feature extraction transforms should there be? (In order to: minimize cognitive overload for human review; sufficiently describe the domains, ranges, complexity costs, inconstant timings, and the necessary and also the possible outcomes given concurrency,)
From "S2n-TLS – A C99 implementation of the TLS/SSL protocol" https://news.ycombinator.com/item?id=38510025 :
> But formal methods (and TLA+ for distributed computation) don't eliminate side channels. [in CPUs e.g. with branch prediction, GPUs, TPUs/NPUs, Hypervisors, OS schedulers, IPC,]
Still though, coverage-based fuzzing;
From https://news.ycombinator.com/item?id=30786239 :
> OSS-Fuzz runs CloudFuzz[Lite?] for many open source repos and feeds OSV OpenSSF Vulnerability Format: https://github.com/google/osv#current-data-sources
From "Automated Unit Test Improvement using Large Language Models at Meta" https://news.ycombinator.com/item?id=39416628 :
> "Fuzz target generation using LLMs" (2023), OSSF//fuzz-introspector*
> gh topic: https://github.com/topics/coverage-guided-fuzzing
The Fuzzing computational task is similar to the Genetic Algorithm computational task, in that both explore combinatorial Hilbert spaces of potentially infinite degree and thus there is need for parallelism and thus there is need for partitioning for distributed computation. (But there is no computational oracle to predict that any particular sequence of combinations of inputs under test will deterministically halt on any of the distributed workers, so second-order methods like gradient descent help to skip over apparently desolate territory when the error hasn't changed in a while.)
The Fuzzing computational task: partition the set of all combinations of inputs for distributed execution with execute-once or consensus to resolve redundant results.
DbC Design-By-Contract patterns include Preconditions and Postconditions (which include tests of Invariance)
We test Preconditions to exclude Inputs that do not meet the specified Ranges, and we verify the Ranges of Outputs in Postconditions.
We test Invariance to verify that there haven't been side-effects in other scopes; that variables and their attributes haven't changed after the function - the Command - returns.
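A minimal assert-based sketch of those three checks, on a hypothetical function (real DbC libraries use decorators; plain asserts show the idea):

```python
def int_sqrt(x: int) -> int:
    # Precondition: input must be in the specified range (non-negative int).
    assert isinstance(x, int) and x >= 0
    r = 0
    while (r + 1) ** 2 <= x:
        r += 1
    # Postcondition: output is the floor square root of x.
    assert r * r <= x < (r + 1) ** 2
    return r

frozen = (1, 2, 3)          # state in another scope
result = int_sqrt(15)
# Invariance: the command had no side effects on other scopes.
assert frozen == (1, 2, 3)
```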
DbC: https://en.wikipedia.org/wiki/Design_by_contract :
> Design by contract has its roots in work on formal verification, formal specification and Hoare logic.
TLA+ > Language: https://en.wikipedia.org/wiki/TLA%2B#Language
Formal verification: https://en.wikipedia.org/wiki/Formal_verification
From https://news.ycombinator.com/item?id=38138319 :
> Property testing: https://en.wikipedia.org/wiki/Property_testing
> awesome-python-testing#property-based-testing: https://github.com/cleder/awesome-python-testing#property-ba...
> Fuzzing: https://en.wikipedia.org/wiki/Fuzzing
Software testing > Categorization > [..., Property testing, Metamorphic testing] https://en.wikipedia.org/wiki/Software_testing#Categorizatio...
--
From https://news.ycombinator.com/item?id=28494885#28513982 :
> https://github.com/dafny-lang/dafny #read-more
> Dafny Cheat Sheet: https://docs.google.com/document/d/1kz5_yqzhrEyXII96eCF1YoHZ...
> Looks like there's a Haskell-to-Dafny converter.
haskell2dafny: https://gitlab.doc.ic.ac.uk/dcw/haskell-subset-to-dafny-tran...
--
Controlled randomness: tests of randomness, random uniform not random normal, rngd:
From https://news.ycombinator.com/item?id=40630177 :
> google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid... docs: https://github.com/google/paranoid_crypto/blob/main/docs/ran...
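A toy version of the simplest such check (a "monobit" frequency test) over a seeded, hence reproducible, PRNG; real suites like paranoid_crypto run many stronger tests:

```python
import random

random.seed(1234)  # controlled randomness: seeded, so the run is reproducible
n = 10_000
ones = sum(random.getrandbits(1) for _ in range(n))

# For an unbiased source, ones ≈ n/2 (sigma = sqrt(n)/2 = 50 here);
# a gross imbalance flags a biased generator.
bias = abs(ones - n / 2)
assert bias < 300  # within ~6 standard deviations
```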
'Chemical recycling': 15-minute reaction turns old clothes into useful molecules
> The researchers instead turned to chemical recycling to break down some synthetic components of fabrics into reusable building blocks. They used a chemical reaction called microwave-assisted glycolysis, which can break up large chains of molecules — polymers — into smaller units, with the help of heat and a catalyst. They used this to process fabrics with different compositions, including 100% polyester and 50/50 polycotton, which is made up of polyester and cotton.
> For pure polyester fabric, the reaction converted 90% of the polyester into a molecule called BHET, which can be directly recycled to create more polyester textiles. The researchers found that the reaction didn’t affect cotton, so in polyester–cotton fabrics, it was possible to both break down the polyester and recover the cotton. Crucially, the team was able to optimise the reaction conditions so that the process took just 15 minutes, making it extremely cost-effective.
1. Flash heating plastic yields Hydrogen and Graphene; "Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763 https://news.ycombinator.com/item?id=37886982
2. This produces microwave (and THz) with Copper as the dielectric; though I don't think they were trying to make a microwave oven; "Pulsed THz radiation under ultrafast optical discharge of vacuum photodiode" https://news.ycombinator.com/item?id=40858955
3. Metasurface arrays could probably also waste less energy on microwaving clothing and textiles in order to recycle.
4. Are SPPs distinct from Cherenkov radiation, and photon-electron-phonon vorticity?
From "Electrons turn piece of wire into laser-like light source" https://news.ycombinator.com/item?id=33493885 :
Surface plasmon polaritons : https://en.wikipedia.org/wiki/Surface_plasmon_polaritons :
> [...] SPPs enable subwavelength optics in microscopy and photolithography beyond the diffraction limit. SPPs also enable the first steady-state micro-mechanical measurement of a fundamental property of light itself: the momentum of a photon in a dielectric medium. Other applications are photonic data storage, light generation, and bio-photonics.
But that's IR and visible light EM, not microwave EM.
Scientists discover way to 'grow' sub-nanometer sized transistors
"Integrated 1D epitaxial mirror twin boundaries for ultra-scaled 2D MoS2 field-effect transistors" (2024) https://www.nature.com/articles/s41565-024-01706-1
A similar process for graphene-based transistors would change the world.
"Ultrahigh-mobility semiconducting epitaxial graphene on silicon carbide" (2024) https://www.nature.com/articles/s41586-023-06811-0 .. "Researchers create first functional graphene semiconductor" https://news.ycombinator.com/item?id=38869171
GPUs can now use PCIe-attached memory or SSDs to boost VRAM capacity
I never understood why GPUs can't use pluggable memory, like CPUs can?
Cost originally, more recently probably so vendors can segment the market better for different products, i.e. you now have to get the fastest processor to get the most ram, regardless of your workload.
Good job decreasing latency.
Now work on the bandwidth.
A single HBM3 module has the bandwidth of half-a-dozen data center grade PCIe 5.0 x16 NVME drives.
A single DDR5 DIMM has the bandwidth of a pair of PCIe 5.0 x4 NVME drives.
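Back-of-envelope arithmetic behind those comparisons (approximate peak figures; exact rates vary by part):

```python
# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding → ~3.94 GB/s per lane.
pcie5_lane = 32 * 128 / 130 / 8   # GB/s
pcie5_x4 = 4 * pcie5_lane         # ~15.8 GB/s: one fast NVMe drive
pcie5_x16 = 16 * pcie5_lane       # ~63 GB/s

# DDR5-4800 DIMM: 4800 MT/s × 8 bytes ≈ 38.4 GB/s.
ddr5_dimm = 4800 * 8 / 1000       # GB/s

# HBM3 stack: 6.4 Gb/s per pin × 1024 pins ≈ 819 GB/s.
hbm3_stack = 6.4 * 1024 / 8       # GB/s

# One DIMM lands between two and three x4 drives,
# and one HBM3 stack exceeds half a dozen full x16 links.
assert 2 * pcie5_x4 < ddr5_dimm < 3 * pcie5_x4
assert hbm3_stack > 6 * pcie5_x16
```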
"New RISC-V microprocessor can run CPU, GPU, and NPU workloads simultaneously" https://news.ycombinator.com/item?id=39938538
Show HN: Jb / json.bash – Command-line tool (and bash library) that creates JSON
jb is a UNIX tool that creates JSON, for shell scripts or interactive use. Its "one thing" is to get shell-native data (environment variables, files, program output) to somewhere else, using JSON to encapsulate it robustly.
I wrote this because I wanted a robust and ergonomic way to create ad-hoc JSON data from the command line and scripts. I wanted errors to not pass silently, not coerce data types, not put secrets into argv. I wanted to leverage shell features/patterns like process substitution, environment variables, reading/streaming from files and null-terminated data.
If you know of the jo program, jb is similar, but type-safe by default and more flexible. jo coerces types, using flags like -n to coerce to a specific type (number for -n), without failing if the input is invalid. jb encodes values as strings by default, requiring type annotations to parse & encode values as a specific type (failing if the value is invalid).
If you know jq, jb is complementary in that jq is great at transforming data already in JSON format, but it's fiddly to get non-JSON data into jq. In contrast, jb is good at getting unstructured data from arguments, environment variables and files into JSON (so that jq could use it), but jb cannot do any transformation of data, only parsing & encoding into JSON types.
I feel rather guilty about having written this in bash. It's something of a boiled frog story. I started out just wanting to encode JSON strings from a shell script, without dependencies, with the intention of piping them into jq. After a few trials I was able to encode JSON strings in bash with surprising performance, using array operations to encode multiple strings at once. It grew from there into a complete tool. I'd certainly not choose bash if I was starting from scratch now...
jshn.sh: https://openwrt.org/docs/guide-developer/jshn src: https://git.openwrt.org/?p=project/libubox.git;a=blob;f=sh/j... :
> jshn (JSON SHell Notation), a small utility and shell library for parsing and generating JSON data
The Hunt for the Most Efficient Heat Pump in the World
> https://HeatPumpMonitor.org/ is a unique resource at present. It can be difficult to locate data on live heat pump performance. WIRED spent weeks scouring the web and speaking to experts to find examples of heat pump systems that could beat Rob Ritchie’s current SCOP and found only a handful of possible contenders.
Neuroscientists must not be afraid to study religion
- /? religion out of body experiences and epilepsy neuroimaging: https://www.google.com/search?q=religion+out+of+body+experie...
- /? auditory cortex verbal hallucinations: https://www.google.com/search?q=auditory%20cortex%20verbal%2...
- /? auditory cortex for schizophrenia treatment: https://www.google.com/search?q=auditory%20cortex%20for%20sc...
- Humanism > Varieties of humanism: https://en.wikipedia.org/wiki/Humanism#Varieties_of_humanism re: the many ways bootstrapping a sufficient morality because the Golden Rule and the 3 Laws of Robotics are insufficient in comparison to e.g. statutes printed out every year; and because LLMs lack reasoning, inference, and critical thinking and thus also ethics.
Sunmaxx PVT, Oxford PV launch perovskite-silicon TPV with 80% overall efficiency
> The new photovoltaic-thermal module, presented for the first time on the first day of Intersolar 2024, has a record electrical efficiency of 26.6% and thermal efficiency of 53.4%. The electrical output of the module with 6cm x 10cm M6 cells is 433 W
- Intersolar 2024 Liveblog: https://www.pv-magazine.com/2024/06/19/intersolar-2024-day-1...
- /?gnews Intersolar 2024: https://www.google.com/search?q=intersolar%202024&tbm=nws
- "French startup offers concrete solar carport" (2024-05) https://www.pv-magazine.com/2024/05/24/french-startup-offers...
- Concrete superconductor batteries that would work in datacenters with controlled relative humidity, but don't work unless kept dry: "Low-cost additive turns concrete slabs into super-fast energy storage" (2023-08) https://news.ycombinator.com/item?id=36964735
Google's carbon emissions surge nearly 50% due to AI energy demand
A few solutions for this:
- "Ask HN: How to reuse waste heat and water from AI datacenters?" (2024) https://news.ycombinator.com/item?id=40820952
- "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
(Edit)
- "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distilation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39999225 ... "AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water" https://news.ycombinator.com/item?id=40783690
- "Ask HN: Does mounting servers parallel with the temperature gradient trap heat?" (2020) https://news.ycombinator.com/item?id=23033210 :
> Would mounting servers sideways (vertically) allow heat to transfer out of the rack? https://news.ycombinator.com/item?id=39555606
- Concrete parking lot TPV Thermal Solar canopies, And concrete supercapacitor batteries: https://news.ycombinator.com/item?id=40862190
Ladybird Web Browser becomes a non-profit with $1M from GitHub Founder
OTOH, feature ideas: Formal Verification, Process Isolation, secure coding in Rust.
- Quark is written in Coq and is formally verified. What can be learned from the design of Quark and other, larger formally-verified apps?
From "Why Don't People Use Formal Methods?" (2019) https://news.ycombinator.com/item?id=18965964 :
> - "Quark : A Web Browser with a Formally Verified Kernel" (2012) (Coq, Haskell) http://goto.ucsd.edu/quark/
From https://news.ycombinator.com/item?id=37451147 :
> - "How to Discover and Prevent Linux Kernel Zero-day Exploit using Formal Verification" (2021) [w/ Coq] http://digamma.ai/blog/discover-prevent-linux-kernel-zero-da... https://news.ycombinator.com/item?id=31617335
- Rootless containers require /etc/subuids to remap uids. Browsers could run subprocesses like rootless containers in addition to namespaces and application-level sandboxing.
- Chrome and Firefox use the same pwn2own'd sandbox.
- Container-selinux and rootless containers and browser tab processes
- "Memory Sealing "Mseal" System Call Merged for Linux 6.10" (2024) https://news.ycombinator.com/item?id=40474551
- Endokernel process isolation: From "The Docker+WASM Technical Preview" https://news.ycombinator.com/item?id=33324934
- QubesOS isolates processes with VMs.
- Gvisor and Kata containers further isolate container processes
- W3C Web Worker API and W3C Service Worker API and process isolation, and resource utilization
- From "WebGPU is now available on Android" https://news.ycombinator.com/item?id=39046787 :
>> What are some ideas for UI Visual Affordances to solve for bad UX due to slow browser tabs and extensions?
>> - [ ] UBY: Browsers: Strobe the tab or extension button when it's beyond (configurable) resource usage thresholds
>> - [ ] UBY: Browsers: Vary the {color, size, fill} of the tabs according to their relative resource utilization
>> - [ ] ENH,SEC: Browsers: specify per-tab/per-domain resource quotas: CPU
- What can be learned from the methods and patterns of Rust rewrites, again of larger applications?
"MotorOS: a Rust-first operating system for x64 VMs" https://news.ycombinator.com/item?id=38907876 :
> "Maestro: A Linux-compatible kernel in Rust" (2023) https://news.ycombinator.com/item?id=38852360#38857185 ; redox-os, cosmic-de , Motūrus OS; MotorOS
From "Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=34743393 :
> - "The Rust Implementation of GNU Coreutils Is Becoming Remarkably Robust" https://news.ycombinator.com/item?id=34743393
> [Rust Secure Coding Guidelines, awesome-safety-critical,]
Pulsed THz radiation under ultrafast optical discharge of vacuum photodiode
"Pulsed THz radiation under ultrafast optical discharge of vacuum photodiode" (2024) https://link.springer.com/article/10.1007/s12200-024-00123-5 :
> Abstract: In this paper, we first present an experimental demonstration of terahertz radiation pulse generation with energy up to 5 pJ under the electron emission during ultrafast optical discharge of a vacuum photodiode. We use a femtosecond optical excitation of metallic copper photocathode for the generation of ultrashort electron bunch and up to 45 kV/cm external electric field for the photo-emitted electron acceleration. Measurements of terahertz pulses energy as a function of emitted charge density, incidence angle of optical radiation and applied electric field have been provided. Spectral and polarization characteristics of generated terahertz pulses have also been studied. The proposed semi-analytical model and simulations in COMSOL Multiphysics prove the experimental data and allow for the optimization of experimental conditions aimed at flexible control of radiation parameters.
- "When ultrashort electron bunch accelerates and drastically stops, it can generate terahertz radiation" (2024) https://phys.org/news/2024-07-ultrashort-electron-bunch-dras...
Photocathode > Photocathode materials: the Wikipedia list of photocathode materials doesn't include Cu (copper): https://en.wikipedia.org/wiki/Photocathode
"Earth-abundant Cu-based metal oxide photocathodes for photoelectrochemical water splitting" (2020) https://pubs.rsc.org/en/content/articlelanding/2020/ee/d0ee0...
LightSlinger antennae are FTL within the dielectric, but the EMR is not FTL; from https://news.ycombinator.com/item?id=37342016 :
> "Smaller, more versatile antenna could be a communications game-changer" (2022) https://news.ycombinator.com/item?id=37337628 :
> LightSlingers use volume-distributed polarization currents, animated within a dielectric to faster-than-light [FTL] speeds, to emit electromagnetic waves. (By contrast, traditional antennas employ surface currents of subluminally moving massive particles on localized metallic elements such as dipoles.) Owing to the superluminal motion of the radiation source, LightSlingers are capable of “slinging” tightly focused wave packets with high precision toward a location of choice. This gives them potential advantages over phased arrays in secure communications such as 4G and 5G local networks
From https://news.ycombinator.com/item?id=39365579 :
> "Coherent interaction of a-few-electron quantum dot with a terahertz optical resonator" (2023) https://arxiv.org/abs/2204.10522 ; "Researchers solve a foundational problem in transmitting quantum information" (2024) [w/ AlGaAs/GaAs]
Though the [qubit wave function transmission range] on that is only tens to a hundred micrometers. With 5pJ microwave like OT, what is the range?
Ancient lingams had Copper (Cu) and Gold (Au), and crystal FWIU.
From "Can you pump water without any electricity?" https://news.ycombinator.com/item?id=40619745 :
> - /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
- "High-sensitivity terahertz detection by 2D plasmons in transistors" (2024) https://news.ycombinator.com/item?id=38800406
America's startup boom around remote work and technology is still going strong
Nuclear spectroscopy breakthrough could rewrite fundamental constants of nature
> When trapped in a transparent, fluorine-rich crystal, scientists can use a laser to excite the nucleus of a thorium-229 atom.
> [...] This accomplishment means that measurements of time, gravity and other fields that are currently performed using atomic electrons can be made with orders of magnitude higher accuracy
Show HN: Drop-in SQS replacement based on SQLite
Hi! I wanted to share an open source API-compatible replacement for SQS. It's written in Go, distributes as a single binary, and uses SQLite for underlying storage.
I wrote this because I wanted a queue with all the bells and whistles - searching, scheduling into the future, observability, and rate limiting - all the things that many modern task queue systems have.
But I didn't want to rewrite my app, which was already using SQS. And I was frustrated that many of the best solutions out there (BullMQ, Oban, Sidekiq) were language-specific.
So I made an SQS-compatible replacement. All you have to do is replace the endpoint using AWS' native library in your language of choice.
For example, the queue works with Celery - you just change the connection string. From there, you can see all of your messages and their status, which is hard today in the SQS console (and flower doesn't support SQS.)
It is written to be pluggable. The queue implementation uses SQLite, but I've been experimenting with RocksDB as a backend and you could even write one that uses Postgres. Similarly, you could implement multiple protocols (AMQP, PubSub, etc) on top of the underlying queue. I started with SQS because it is simple and I use it a lot.
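To illustrate the SQLite-backed storage approach, here is a minimal sketch of a queue with SQS-style visibility timeouts and scheduled delivery. The schema and class names are hypothetical, not the project's actual implementation:

```python
import sqlite3
import time
import uuid

class SqliteQueue:
    """Toy SQS-like queue on a single SQLite table (illustrative only)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            """CREATE TABLE IF NOT EXISTS messages (
                   id TEXT PRIMARY KEY,
                   body TEXT,
                   visible_at REAL,   -- hidden until this timestamp
                   deliver_at REAL    -- supports scheduling into the future
               )"""
        )

    def send(self, body, delay=0.0):
        mid = str(uuid.uuid4())
        now = time.time()
        self.db.execute(
            "INSERT INTO messages VALUES (?, ?, ?, ?)",
            (mid, body, now + delay, now + delay),
        )
        self.db.commit()
        return mid

    def receive(self, visibility_timeout=30.0):
        now = time.time()
        row = self.db.execute(
            "SELECT id, body FROM messages WHERE visible_at <= ? "
            "ORDER BY deliver_at LIMIT 1",
            (now,),
        ).fetchone()
        if row is None:
            return None
        # Hide the message for the visibility timeout, as SQS does;
        # an unacknowledged message becomes visible again later.
        self.db.execute(
            "UPDATE messages SET visible_at = ? WHERE id = ?",
            (now + visibility_timeout, row[0]),
        )
        self.db.commit()
        return row

    def delete(self, mid):
        # Equivalent of SQS DeleteMessage: acknowledge and remove.
        self.db.execute("DELETE FROM messages WHERE id = ?", (mid,))
        self.db.commit()
```

A real implementation would also need receipt handles, dead-letter handling, and concurrency control, but the single-table shape shows why SQLite is a plausible backend here.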
It is written to be as easy to deploy as possible - a single go binary. I'm working on adding distributed and autoscale functionality as the next layer.
Today I have search, observability (via Prometheus), unlimited message sizes, and the ability to schedule messages arbitrarily in the future.
In terms of monetization, the goal is to just have a hosted queue system. I believe this can be cheaper than SQS without sacrificing performance. Just as Backblaze and Minio have had success competing in the S3 space, I wanted to take a crack at queues.
I'd love your feedback!
Could you elaborate why you didn't choose e.g. RabbitMQ? I mean you talk about AMQP and such, it seems to me, that a client-side abstraction would be much more efficient in providing an exit strategy for SQS than creating an SQS "compatible" broker. For example in the Java ecosystem there is https://smallrye.io/smallrye-reactive-messaging/latest/ that serves a similar purpose.
Where AWS is the likely migration path for an app that needs to scale beyond dev containers, having already tested against an SQS workalike prevents rework.
The celery Backends and Brokers docs compare SQS and RabbitMQ AMQP: https://docs.celeryq.dev/en/stable/getting-started/backends-...
Celery's flower utility doesn't work with SQS or GCP's {Cloud Tasks, Cloud Pub/Sub, Firebase Cloud Messaging FWIU} but does work with AMQP, which is a reliable messaging protocol.
RabbitMQ is backed by mnesia, an Erlang/OTP library for distributed Durable data storage. Mnesia: https://en.wikipedia.org/wiki/Mnesia
SQLite is written in C and has an extensive test suite, because of its use in aerospace IIUC.
There are many extensions of SQLite; rqlite, cr-sqlite, postlite, electricsql, sqledge, and also WASM: sqlite-wasm, sqlite-wasm-http
celery/kombu > Transport brokers support / comparison table: https://github.com/celery/kombu?tab=readme-ov-file#transport...
Kombu has supported Apache Kafka since 2022, but celery doesn't yet support Kafka: https://github.com/celery/celery/issues/7674#issuecomment-12...
I don't get what you are pointing at.
RabbitMQ and other MOMs like Kafka are very versatile. What is the use case for not using SQS right now, but maybe later?
And if there is a use case (e.g. production-grade on-premise deployment), why not a client-side facade for a production-grade MOM (e.g. in Celery instead of sqs: amqp:)? Most MOMs should be more feature-rich than SQS. At-least-once delivery without pub/sub is usually the baseline and easy to configure.
I mean if this project reaches its goal to provide an SQS compatible replacement, that is nice, but I wonder if such a maturity comes with the complexity this project originally wants to avoid.
When you're developing an information system but don't want to pay for SQS in development or in production; similar to running tests with SQLite instead of MySQL/Postgres.
SQS and heavier ESBs are overkill for some applications, and underkill for others where an HA configuration for the MQ / task queue is necessary.
Tests might be a valid use case, but doesn't seem to be the goal here and there are some solutions out there if you dont want to start a lightweight broker in a container.
Why would you want to use the SQS protocol in production without targeting the SQS "broker" as well? The timing and the AWS imposed quotas are bound to be different.
There are plenty brokers that fit different needs. I don't see the SQS protocol especially with security and so on in mind as a good fit in this case.
The use case is avoiding the switching cost from a local almost-SQS to expensive HA SQS when scale and/or the client require it.
SQS is not a reliable exactly-once messaging protocol like AMQP, and it doesn't do task-level accounting or result storage (which SQLite also solves for).
Apache Kafka > See also: https://en.wikipedia.org/wiki/Apache_Kafka
SQS is not particularly expensive.
I don't know what you are getting at. I work on a project with SQS and I worked with RabbitMQ, ArtemisMQ and other MOM technologies as well. At least once delivery is something you can achieve easily. This would be the common ground that SQS can also provide. The same is true for Kafka.
Booting Linux off of Google Drive
Back in the day it was possible to boot Sun Solaris over HTTP. This was called wanboot. This article reminded me of that.
This was basically an option of the OpenBoot PROM firmware of the SPARC machines.
It looked like this (ok is the forth prompt of the firmware):
ok setenv network-boot-arguments dhcp,hostname=myclient,file=https://192.168.1.1/cgi-bin/wanboot-cgi
ok boot net
This loads not only the initramfs but also the kernel over the (inter)network. https://docs.oracle.com/cd/E26505_01/html/E28037/wanboottask...
https://docs.oracle.com/cd/E19253-01/821-0439/wanboottasks2-...
Booting over HTTP would be interesting for devices like the Raspberry Pi. Then you could run without a memory card and have fewer things to break.
I would also prefer HTTP, but Pis can use PXE boot and mount their root filesystem over NFS already:) Official docs are https://www.raspberrypi.com/documentation/computers/raspberr... and they have a tutorial at https://www.raspberrypi.com/documentation/computers/remote-a...
Once you have PXE you can do all the things -- NFS boot, HTTP boot, iSCSI boot, and so on. There are several open source projects that support this. I think the most recent iteration is iPXE.
iPXE: https://en.wikipedia.org/wiki/IPXE :
> While standard PXE clients use only TFTP to load parameters and programs from the server, iPXE client software can use additional protocols, including HTTP, iSCSI, ATA over Ethernet (AoE), and Fibre Channel over Ethernet (FCoE). Also, on certain hardware, iPXE client software can use a Wi-Fi link, as opposed to the wired connection required by the PXE standard.
Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time, how does SecureBoot work with iPXE?
> Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time
For HTTPS booting, yes.
> how does SecureBoot work with iPXE?
It doesn't, unless you manage to get your iPXE (along with everything else in the chain of control) signed.
Show HN: Adding Mistral Codestral and GPT-4o to Jupyter Notebooks
Hey HN! We’ve forked Jupyter Lab and added AI code generation features that feel native and have all the context about your notebook. You can see a demo video (2 min) here: https://www.tella.tv/video/clxt7ei4v00rr09i5gt1laop6/view
Try a hosted version here: https://pretzelai.app
Jupyter is by far the most used Data Science tool. Despite its popularity, it still lacks good code-generation extensions. The flagship AI extension jupyter-ai lags far behind in features and UX compared to modern AI code generation and understanding tools (like https://www.continue.dev and https://www.cursor.com). Also, GitHub Copilot still isn’t supported in Jupyter, more than 2 years after its launch. We’re solving this with Pretzel.
Pretzel is a free and open-source fork of Jupyter. You can install it locally with “pip install pretzelai” and launch it with “pretzel lab”. We recommend creating a new python environment if you already have jupyter lab installed. Our GitHub README has more information: https://github.com/pretzelai/pretzelai
For our first iteration, we’ve shipped 3 features:
1. Inline Tab autocomplete: This works similar to GitHub Copilot. You can choose between Mistral Codestral or GPT-4o in the settings
2. Cell level code generation: Click Ask AI or press Cmd+K / Ctrl+K to instruct AI to generate code in the active Jupyter Cell. We provide relevant context from the current notebook to the LLM with RAG. You can refer to existing variables in the notebook using the @variable syntax (for dataframes, it will pass the column names to the LLM)
3. Sidebar chat: Clicking the blue Pretzel Icon on the right sidebar opens this chat (Ctrl+Cmd+B / Ctrl+Alt+B). This chat always has context of your current cell or any selected text. Here too, we use RAG to send any relevant context from the current notebook to the LLM
All of these features work out-of-the-box via our “AI Server” but you have the option of using your own OpenAI API Key. This can be configured in the settings (Menu Bar > Settings > Settings Editor > Search for Pretzel). If you use your own OpenAI API Key but don’t have a Mistral API key, be sure to select OpenAI as the inline code completion model in the settings.
These features are just a start. We're building a modern version of Jupyter. Our roadmap includes frictionless, realtime collaboration (think pair-programming, comments, version history), full-fledged SQL support (both in code cells and as a standalone SQL IDE), a visual analysis builder, a VSCode-like coding experience powered by Monaco, and 1-click dashboard creation and sharing straight from your notebooks.
We’d love for you to try Pretzel and send us any feedback, no matter how minor (see my bio for contact info, or file a GitHub issue here: https://github.com/pretzelai/pretzelai/issues)
Ramon here, the other cofounder of Pretzel! Quick update: Based on some early feedback, we're already working on adding support for local LLMs and Claude Sonnet 3.5. Happy to answer any questions!
braintrust-proxy: https://github.com/braintrustdata/braintrust-proxy
LocalAI: https://github.com/mudler/LocalAI
E.g promptfoo and chainforge have multi-LLM workflows.
Promptfoo has a YAML configuration for prompts, providers,: https://www.promptfoo.dev/docs/configuration/guide/
What is the system prompt, and how does a system prompt also bias an analysis?
/? "system prompt" https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Thank you for the links! We'll take a look.
At the moment, the system prompt is hardcoded in code. I don't think it should bias analyses because our goal is usually returning code that does something specific (for eg "calculate normalized rates for column X grouped by column Y") as opposed to something generic ("why is our churn rate going up"). So, the idea would be that the operator is responsible for asking the right questions - the AI merely acts as a facilitator to generate and understand code.
Also, tangentially: we do want to allow users some kind of local prompt management with easy access, potentially via slash commands. In our testing so far, the existing prompt largely works (it took us a fair bit of trial and error to find something that mostly works for common data use cases), but we're hitting the limitations of a fixed system prompt, so these links will be useful.
Supreme Court rules ex-presidents have immunity for official acts
The Constitution reads:
Judgment in Cases of Impeachment shall not extend further than to removal from Office,
[...]
The President, Vice President and all Civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
How does the comma matter in contracts? This:
"Impeachment for, and Conviction of"
Is distinct in meaning from this: "Impeachment for and Conviction of"
Furthermore: "Judgement in cases of Impeachment"
Is not: "Conviction in cases of Impeachment"
Doesn't this then imply that "Judgement in cases of Impeachment" (i.e. by the Senate) is distinct from "Conviction"? Such would imply that presidents can be Impeached and Judged, and Convicted.
(Furthermore, it is clear that the founders' intent was not to create an immune King.)
The Constitution reads:
Judgment in Cases of Impeachment shall not extend further than to removal from Office,
[...]
The President, Vice President and all Civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.
> Such would imply that presidents can be Impeached and Judged, and Convicted.
Convicted just as other citizens with Limited Privileges and Immunities.
OPINION: In the US Constitution, removal upon "Impeachment for" is distinct from removal for "Conviction of". Thereby there is removal from office for both: a) Impeachment by the Judgement of the House and Senate, and also by b) Conviction by implied existing criminal procedure for non-immune acts including "Treason, Bribery, or other high Crimes and Misdemeanors."
Conviction is not wrought through Impeachment by the House & Senate, who can only remove from office.
Neither is Arrest Removal from Office, nor is Removal from Office Arrest.
Thereby, a President (like all other citizens) can be Convicted and then Impeached.
That the Executive's own DOJ doesn't prosecute a sitting President is simply a courtesy.
Structure and Interpretation of Classical Mechanics (2015)
I've finally gotten around to reading SICM, but I can't get MIT Scheme to install on the current OSX 14.2. Autoconf requires an old version of the OSX SDK. Has someone else already solved that problem?
I was surprised by how many ways Sussman and Wisdom found to modernise the Landau and Lipshitz treatment. There is the obvious change, where the first time they solve some equations of motion, it's done numerically, and the solution is chaotic.
There is also a more subtle change, where they keep sneaking in the concepts of differential geometry. The word "manifold" is reserved for a footnote, but if you know what tangent spaces and sprays are, it's straightforward to translate the "local tuples" and see what they're actually talking about.
I think this is a good idea. If physics undergraduates were exposed to manifolds and tangent spaces in their analytical mechanics course, then saw some exterior calculus in their first electromagnetism course, they might be ready for curvature and geodesics when they study general relativity.
I don't know if this is useful to you. I haven't tried to install MIT Scheme directly on macOS. Instead, I'm using the sample project at [1] to run a linux VM with Fedora on macOS x86-64. I was able to install MIT Scheme and scmutils on the VM by following the instructions at [2][3][4][5] for CentOS. It appears to have succeeded but I haven't really worked with the system yet.
I haven't tried the instructions for ARM-based macOS at [6] because I'm still on Intel.
It's less convenient to run a VM than to run on macOS directly. But I prefer to sandbox "random" software that way, and some things support linux better than macOS.
[1] https://developer.apple.com/documentation/virtualization/run...
[2] http://groups.csail.mit.edu/mac/users/gjs/6946/
[3] http://groups.csail.mit.edu/mac/users/gjs/6946/installation....
[4] http://groups.csail.mit.edu/mac/users/gjs/6946/mechanics-sys...
[5] https://www.gnu.org/software/mit-scheme/documentation/stable...
[6] http://groups.csail.mit.edu/mac/users/gjs/6946/mechanics-sys...
(You have to install texinfo for the online documentation part of [5]. Skip the 'install-pdf' documentation target if you don't want to depend on tex/latex.)
Does `brew install mit-scheme` work? The homebrew formula says: https://github.com/Homebrew/homebrew-core/blob/8c0bb91eea9c8... :
# Does not build: https://savannah.gnu.org/bugs/?64611
deprecate! date: "2023-11-20", because: :does_not_build
Podman runs Linux containers in a VM in QEMU on MacOS. https://podman.io/docs/installation
Ask HN: Built a DB of over 50M+ Org Names for API use. Should it be made public?
TLDR: Built an Internal API that maps over 50M+ Organization Names to their Domains and Logos. Planning to make this API Endpoint Publicly available for other devs to use. Good Idea or Bad???
I run an AI startup where we fine-tune Open Source LLama and Falcon models to turn them into Enterprise Grade Models with longer context windows and better reasoning capabilities. We ended up collecting over 50 Million+ organization data. Recently we came across a use case from one of our customers that they want to use AI to create an auto populating CRM. So right now we have an Internal API that maps all Organization Domains to their Official Names and their Logos. Useful for all those Devs who want to fetch Organization Logos and Organization Names from Domains or vice versa. Should I be converting the API into a Publicly Accessible one for people to use it in their projects?!!??
Yeah; how do you indicate uncertainty in the AI-estimated correspondences? W3C CSVW supports dataset-, column-, and cell-level metadata. E.g. the OpenCog AtomSpace hypergraph supports an Attention Value and a Truth Value.
Are there surprising regional and temporal trends in the names?
RDFS specifies a standard vocabulary for classes and subclasses, and properties and sub properties; rdfs:Class , rdfs:Property .
There are schema.org properties on the schema:LocalBusiness class for various business identifiers and other attributes;
https://schema.org/url : domain
https://schema.org/identifier and subproperties : https://schema.org/duns , https://schema.org/taxID ,
https://schema.org/brand re: https://schema.org/Brand , https://schema.org/Organization
Maybe, a https://schema.org/Dataset :isPartOf a https://schema.org/ScholarlyArticle
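One way to carry per-match uncertainty is an explicit confidence column described in CSVW table metadata. A sketch in Python, where the `match_confidence` column and its description are assumptions (not part of the CSVW vocabulary itself), while `@context` and `tableSchema` are standard CSVW:

```python
import json

# CSVW-style table metadata for a hypothetical orgs.csv, with a
# model-estimated confidence score attached to each name→domain match.
metadata = {
    "@context": "http://www.w3.org/ns/csvw",
    "url": "orgs.csv",
    "tableSchema": {
        "columns": [
            {"name": "org_name", "datatype": "string"},
            {"name": "domain", "datatype": "string"},
            # Assumed column: not a CSVW built-in, just a plain data column
            # whose meaning is documented via a common property.
            {"name": "match_confidence", "datatype": "number",
             "dc:description": "model-estimated probability that "
                               "domain corresponds to org_name"},
        ]
    },
}

print(json.dumps(metadata, indent=2))
```

Consumers could then filter or flag rows below a confidence threshold instead of treating every AI-derived mapping as ground truth.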
SIMD-accelerated computer vision on a $2 microcontroller
> As I've been really interested in computer vision lately, I decided on writing a SIMD-accelerated implementation of the FAST feature detector for the ESP32-S3 [...]
> In the end, I was able to improve the throughput of the FAST feature detector by about 220%, from 5.1MP/s to 11.2MP/s in my testing. This is well within the acceptable range of performance for realtime computer vision tasks, enabling the ESP32-S3 to easily process a 30fps VGA stream.
What are some use cases for FAST?
Features from accelerated segment test: https://en.wikipedia.org/wiki/Features_from_accelerated_segm...
Is there TPU-like functionality in anything in this price range of chips yet?
NEON is an optional SIMD instruction-set extension for ARMv7 and ARMv8, so the Pi Zero and larger have SIMD extensions.
The Orin Nano has 40 TOPS, which is sufficient for Copilot+ AFAIU. "A PCIe Coral TPU Finally Works on Raspberry Pi 5" https://news.ycombinator.com/item?id=38310063
From https://phys.org/news/2024-06-infrared-visible-device-2d-mat... :
> Using this method, they were able to up-convert infrared light of wavelength around 1550 nm to 622 nm visible light. The output light wave can be detected using traditional silicon-based cameras.
> "This process is coherent—the properties of the input beam are preserved at the output. This means that if one imprints a particular pattern in the input infrared frequency, it automatically gets transferred to the new output frequency," explains Varun Raghunathan, Associate Professor in the Department of Electrical Communication Engineering (ECE) and corresponding author of the study published in Laser & Photonics Reviews.
"Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" https://news.ycombinator.com/item?id=35117847#35120403 https://news.ycombinator.com/item?id=40275530
"Designing a SIMD Algorithm from Scratch" https://news.ycombinator.com/item?id=38450374
Thanks for reading!
> What are some use cases for FAST?
The FAST feature detector is an algorithm for finding regions of an image that are visually distinctive, which can be used as a first step in motion tracking and SLAM (simultaneous localization and mapping) algorithms typically seen in XR, robotics, etc.
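The core of FAST is a segment test: a pixel is a corner if enough contiguous pixels on a 16-pixel Bresenham circle around it are all brighter or all darker than the center by a threshold. A pure-Python sketch of the FAST-9 variant for illustration (not the article's SIMD implementation; the function name and defaults are my own):

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=50, n=9):
    """True if >= n contiguous circle pixels are all brighter than
    img[y][x] + t or all darker than img[y][x] - t (FAST-9 segment test)."""
    p = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for sign in (+1, -1):          # +1 checks "brighter", -1 checks "darker"
        run = 0
        for v in ring * 2:         # walk the ring twice to handle wrap-around
            run = run + 1 if sign * (v - p) > t else 0
            if run >= n:
                return True
    return False
```

Real implementations add the high-speed pretest on 4 pixels, non-maximum suppression, and (as in the article) SIMD to evaluate many pixels per instruction; this sketch only shows the decision rule.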
> Is there TPU-like functionality in anything in this price range of chips yet?
I think that in the case of the ESP32-S3, its SIMD instructions are designed to accelerate the inference of quantized AI models (see: https://github.com/espressif/esp-dl), and also some signal processing like FFTs. I guess you could call the SIMD instructions TPU-like, in the sense that the chip has specific instructions that facilitates ML inference (EE.VRELU.Sx performs the ReLU operation). Using these instructions will still take away CPU time where TPUs are typically their own processing core, operating asynchronously. I’d say this is closer to ARM NEON.
SimSIMD https://github.com/ashvardanian/SimSIMD :
> Up to 200x Faster Inner Products and Vector Similarity — for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE
github.com/topics/simd: https://github.com/topics/simd
https://news.ycombinator.com/item?id=37805810#37808036
From gh-topics/SIMD:
SIMDe: SIMD everywhere: https://github.com/simd-everywhere/simde :
> The SIMDe header-only library provides fast, portable implementations of SIMD intrinsics on hardware which doesn't natively support them, such as calling SSE functions on ARM. There is no performance penalty if the hardware supports the native implementation (e.g., SSE/AVX runs at full speed on x86, NEON on ARM, etc.).
> This makes porting code to other architectures much easier in a few key ways:
Gold nanoparticles kill cancer – but not as thought
Star-shaped gold nanoparticles up to 200 nm kill at least some forms of cancer:
> Spherical nanoparticles of 10 nanometers in size, produced in Cracow, turned out to be practically harmless to the glioma cell line studied. However, high mortality was observed in cells exposed to nanoparticles as large as 200 nanometers, but with a star-shaped structure. [...]
> "We used the data from the Cracow experiments to build a theoretical model of the process of nanoparticle deposition inside the cells under study. The final result is a differential equation into which suitably processed parameters can be substituted—for the time being only describing the shape and size of nanoparticles—to quickly determine how the uptake of the analyzed particles by cancer cells will proceed over a given period of time," says Dr. Pawel Jakubczyk, professor at the UR and co-author of the model.
> He emphasizes, "Any scientist can already use our model at the design stage of their own research to instantly narrow down the number of nanoparticle variants requiring experimental verification."
> The ability to easily reduce the number of potential experiments to be carried out means a reduction in the costs associated with the purchase of cell lines and reagents, as well as a marked reduction in research time (it typically takes around two weeks just to culture a commercially available cell line). In addition, the model can be used to design better-targeted therapies than before—ones in which the nanoparticles will be particularly well absorbed by selected cancer cells, while maintaining relatively low or even zero toxicity to healthy cells in the patient's other organs.
"Modeling Absorption Dynamics of Differently Shaped Gold Glioblastoma and Colon Cells Based on Refractive Index Distribution in Holotomographic Imaging" (2024) https://onlinelibrary.wiley.com/doi/10.1002/smll.202400778
Holotomography: https://en.wikipedia.org/wiki/Holotomography :
> Holotomography (HT) is a laser technique to measure the three-dimensional refractive index (RI) tomogram of a microscopic sample such as biological cells and tissues. [...] In order to measure 3-D RI tomogram of samples, HT employs the principle of holographic imaging and inverse scattering. Typically, multiple 2D holographic images of a sample are measured at various illumination angles, employing the principle of interferometric imaging. Then, a 3D RI tomogram of the sample is reconstructed from these multiple 2D holographic images by inversely solving light scattering in the sample.
Could part of the cancer-killing nanoparticle star be more magnetic than Gold (Au), for targeting treatment?
Is it only gold stars that do it?
Gold; Au; #79: https://en.m.wikipedia.org/wiki/Gold :
> Chemically, gold is a transition metal, a group 11 element, and one of the noble metals. It is one of the least reactive chemical elements, being the second-lowest in the reactivity series.
Is the effect entirely mechanical stress to the cell and cell membrane and/or oxidative stress?
(What were Earth's earlier oxygen levels during the evolution of mammals?)
Do highly-unreactive gold nanoparticles carry electronic, photonic, and sonic/phononic charge?
Is there additional treatment value in [NIRS-targeted] ultrasound to charge the [magnetic] gold nanoparticles?
How does the resistivity of the cell membrane change with and without star-shaped gold nanoparticles?
> Is the effect entirely mechanical stress to the cell and cell membrane and/or oxidative stress?
The article appears to indicate the "tips" of the star shape are mechanically responsible for causing damage widespread enough to trigger cell death.
PS. This article needs to be widely read.
PPS. Incidentally, the form of microscopy used by the researchers appears to be equally novel and a *gamechanger*.
Chrome is adding `window.ai` – a Gemini Nano AI model right inside the browser
This doesn't seem useful unless it's something standardized across browsers. Otherwise I'd still need to use a plugin to support Safari, etc.
It seems like it could be nice for something like a bookmarklet or a one-off script, but I don't think it'll really reduce friction in engaging with Gemini for serious web apps.
"WebNN: Web Neural Network API" https://news.ycombinator.com/item?id=36158663 :
> - Src: https://github.com/webmachinelearning/webnn
W3C Candidate Recommendation Draft:
> - Spec: https://www.w3.org/TR/webnn/
> WebNN API: https://www.w3.org/TR/webnn/#api :
>> 7.1. The `navigator.ml` interface
>> webnn-polyfill
E.g. Promptfoo, ChainForge, and LocalAI all have abstractions over many models; also re: Google Desktop and GNU Tracker and NVIDIA's pdfgpt: https://news.ycombinator.com/item?id=39363115
promptfoo: https://github.com/promptfoo/promptfoo
ChainForge: https://github.com/ianarawjo/ChainForge
LocalAI: https://github.com/go-skynet/LocalAI
Supreme Court overturns 40-year-old "Chevron deference" doctrine
Congress can actually legislate the right of agencies to interpret the gaps in the laws back into effect - by passing a law that explicitly gives agencies this power.
Just like congress can legislate abortion laws rather than leaving it to judicial precedence.
Fundamentally there’s nothing wrong with the position of supreme court to push the responsibility of lawmaking back on congress.
Where I'm from, our "supreme court" can overthrow congress legislation for not following the constitution. Is this the case here AND is this the case in the US (generally speaking)?
Yes, that's exactly how it works. The US Supreme Court can rule a law unconstitutional, and that's that.
Little known fact: Congress can actually legislate around that by removing the possibility of judicial review from the law itself.
Little known, I suspect, on account of being entirely false.
> on account of being entirely false
Congress can absolutely limit judicial review by statute. (It can’t remove it entirely.)
Shouldn't that require a Constitutional Amendment?
Such a law would bypass Constitutional Separation of Powers (with limited privileges and immunities) i.e. checks and balances.
Why isn't the investigative/prosecutorial branch distinct from the executive and judicial branches though?
> Shouldn't that require a Constitutional Amendment?
No, Article III § 1 explicitly vests judicial power “in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish” [1].
> Why isn't the investigative/prosecutorial branch distinct from the executive and judicial branches though?
What do you think executing laws means?
[1] https://constitution.congress.gov/constitution/article-3/#ar...
No, to change the separation of powers they need a constitutional amendment because that's a change to the Constitution, and amendments are the process for changing the Constitution.
To interpret what was meant by Liberty and Equality as values, as a strict constructionist.
> to change the separation of powers they need a constitutional amendment because that's a change to the Constitution
The Constitution literally says the Congress has the power to establish inferior courts. Congress setting what is justifiable is highly precedented.
The words “separation of powers” never appear in the Constitution. It’s a phrase used to describe the system that document establishes.
Can Congress grant rights? No, because persons have natural rights, inalienable rights; and such enumeration of the rights of persons occurs only in the Declaration, which - along with the Articles of Confederation - frames the intent, spirit, and letter of the Constitution; which itself very specifically limits the powers of the government and affords a process of amendment with quorum for changes to such limits of the government in law.
Congress may not delegate right-granting privileges because the legislature hasn't right-granting privileges itself.
The Constitution is very clear that there are to be separate branches; each with limited privileges and immunities, and none with the total immunity of a Tyrant king.
A system of courts to hear offenses per the law determined by the federal and state legislatures with a Federal Constitutional Supremacy Clause, a small federal government, a federal minarchy, and a state divorce from British case law precedent but not common law or Natural Rights.
And so the Constitution limits the powers of each branch of government, and to amend the Constitution requires an amendment.
Why shouldn't we all filibuster court nominations?
Without an independent prosecutor, can the - e.g. foreign-installed or otherwise fraudulent - executive obstruct DOJ investigations of themselves that conclude prior to the end of their term by terminating a nominated and confirmed director of an executive DOJ department, install justices with his signature, and then pardon themselves and their associates?
The Court can or will only hear matters of law. Congress can impugn and impeach, but they're not trained as prosecutors either; so which competent court will hear such charges? Did any escape charges for war crimes, torture without due process, terror and fear? Who's former counsel on the court now?
What delegations of power, duties, and immunities can occur without constitutional amendment?
Who's acting president today? Where's your birth certificate? You're not even American.
What amendments could we have?
1. You cannot pardon yourself, even as President. Presidents are not granted total immunity (as was recently claimed before the court), they are granted limited Privileges and Immunities.
2. Term limits for legislators, judges, and what about distinguished public/civil servants who pick expensive fights for the rest of us to fight and pay for? You sold us to the banks. Term limits all around.
3. Your plan must specify investment success and failure criteria. (Plan: policy, legislative bill, program, schedule,)
Can Congress just delegate privileges - for example, un-equal right-granting privileges - without an Amendment, because there is to be a system of lower courts?
Additional things that the Constitution, written in the 1770s, doesn't quite get, handle, or address:
US contractors operating abroad on behalf of the US government must obey US government laws while operating abroad. This includes "torture interrogation contractors" hired by an illegally-renditioning executive.
The Federal and State governments have contracted personal defense services to a privately-owned firm. Are they best legally positioned to defend, and why are they better funded than the military?
What prevents citizens from running a debtor blackmail-able fool - who is 35 and an American citizen - for president and puppeting them remotely?
Too dangerous to gamble.
Executive security clearance policies are determined by the actual installed executive; standard procedure was: tax return, arrest record, level of foreign debt.
Would a president be immune for slaving or otherwise aggravatedly human trafficking a vengeful, resentful prisoner on release who intentionally increases expenses and cuts revenue?
Did their regional accent change after college?
Can it be proven that nobody was remoting through anybody? No, it cannot.
And what about installs, ostensibly to protect children, in the past being misappropriated for political harassment, intimidation, and blackmail? How should the court address such a hypothetical "yesterday" capability, which could be used to investigate but also to tamper with and obstruct? Why haven't such capabilities been used to defend America from all threats foreign and domestic, and why are there no countermeasure programs for such, for chambers of justice and lawmaking and healthcare at least?
And what about US Marshalls or other protective services with witness protection reidentification authorization saboteurially "covering" for actual Candidate-elects?
Can a president be witness protected - i.e. someone else assumes their identity and assets - one day before or one day after an election? Are Justices protected from such fraud and identity theft either?
You're not even American.
And what about when persons are assailed while reviewing private, sensitive, confidential, or classified evidence; does such assault exfiltrate evidence to otherwise not-closed-door hearings and investigations?
Which are entitled to a private hearing?
Shouldn't prosecute tortious obstruction? Or should we weakly refuse writ; and is there thus no competent authority (if nobody prosecutes torture and other war crimes)?
Let's all pay for healthcare for one another! Let's all pay for mental healthcare in the United States. A War on Healthcare!
Are branches of government prohibited from installing into, prosecuting, or investigating other branches of government; are there any specific immunities for any officials in any branch in such regard?
Sorry, it's not your fault either.
New ways to catch gravitational waves
- "Kerr-enhanced optical spring for next-generation gravitational wave detectors" (2024) https://news.ycombinator.com/item?id=39957123
- "Physicists Have Figured Out a Way to Measure Gravity on a Quantum Scale" with a superconducting magnetic trap made out of Tantalum (2024) https://news.ycombinator.com/item?id=39495482
Ask HN: How to reuse waste heat and water from AI datacenters?
> Ask HN: How to reuse waste heat and water from AI datacenters?
- Steam Turbine
- Thermoelectrics (solid state)
- Gravel Battery, Sand Battery (solid state)
- OTEC: Ocean Thermal Energy Conversion (needs a 20C gradient)
- Water Sterilization
- Heat Pipe contract
- Commercial and Residential HVAC Heat
- Air Filter Production (CO2, Graphene, Heat)
- Water Filter Production (CO2, Graphene, Heat)
- Onsite manufacturing with Carbon & Heat (See: Datacenter Inputs)
- Carbon-based Chip Production (2D Graphene and other forms of Carbon)
- Thermoforming products (e.g. out of now easily-captured CO2 to Graphene and other forms of Carbon)
- Geopolymer production: pads to mount racks on, bricks, radially-curved outwall blocks
- Open-Source CFD: Computational Fluid Dynamics; use the waste heat to pay for CFD HPC time to reduce the waste heat with better materials, better facility design with passive thermofluidics and energy reclamation
- Datacenter Inputs: [Earthmoving, Infrastructure], Water, Air, Concrete, Beams, Electricity, Turbines, Bolts, Racks, Fans, PCs, Copper power Wires and data Cabling, Bandwidth, Fiber Cabling
- Datacenter Outputs: Boiled Water, Warm Air, Data, recyclable building materials, Electricity
> Ask HN: How to reuse waste heat and water from AI datacenters?
> - Steam Turbine
> - Thermoelectrics
> - Gravel Battery, Sand Battery
> - OTEC: Ocean Thermal Energy Conversion (needs a 20C gradient)
"Ask HN: Does OTEC work with datacenter heat, or thermoelectrics?" https://news.ycombinator.com/item?id=40821522
> - Water Sterilization
FWIU most datacenters waste the sterilized water as steam, which has some evaporative cooling value?
In which markets is it already possible for datacenters [and other industrial production facilities] to sell or give sterilized water to the water supply utility?
"Ask HN: Methods for removing parts per trillions of PFAS from water?" (2024) https://news.ycombinator.com/item?id=40036764 ; RO: Reverse Osmosis, Zwitterionic hydrogels (that are reusable if washed with ethanol)
> - Heat Pipe contract
E.g. Copenhagen Atomics wants to sell a heat pipe contract promise and manage the (Thorium+ waste burner) heat production.
It takes extra energy to pump heat through a tube underground to a building next door, but it is worth it.
> - Commercial and Residential HVAC Heat
The OTEC Wikipedia article mentions Ammonia as a working fluid (a thermofluid), but Ammonia is toxic and hazardous to handle.
Sustainable thermofluids for heat pumps would be a good use for materials design AI.
"AI Designs Magnet Free of Rare-Earth Metals in Just 3 Months" (2024) https://news.ycombinator.com/item?id=40775959
> - Air Filter Production (CO2, Graphene, Heat)
> - Water Filter Production (CO2, Graphene, Heat)
> - Onsite manufacturing with Carbon & Heat (See: Datacenter Inputs)
Products manufactured onsite would also need to meet the relevant ASTM specs.
> - Carbon-based Chip Production (2D Graphene and other forms of Carbon)
"Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
> - Thermoforming products (e.g. out of now easily-captured CO2 to Graphene and other forms of Carbon)
> - Geopolymer production: pads to mount racks on, bricks, radially-curved outwall blocks
Is this new peptide glass a functional substitute for "water glass" in other geopolymer formulas? "Self-healing glass from a simple peptide – just add water" (2024) https://news.ycombinator.com/item?id=40677095
> - Open-Source CFD: Computational Fluid Dynamics; use the waste heat to pay for CFD HPC time to reduce the waste heat with better materials, better facility design with passive thermofluidics and energy reclamation
Quantum CFD. https://westurner.github.io/hnlog/#comment-39954180 Ctrl-F CFD, SQS
> - Datacenter Inputs: [Earthmoving, Infrastructure], Water, Air, Concrete, Beams, Electricity, Turbines, Bolts, Racks, Fans, PCs, Copper power Wires and data Cabling, Bandwidth, Fiber Cabling
Biocomposites and Titanium are strong enough for beams and fan blades; though on corrosion due to water, titanium probably wins. "Cheap yet ultrapure titanium metal might enable widespread use in industry" (2024) https://news.ycombinator.com/item?id=40768549
Is there a carbon-based copper alternative for cabling that's flexible enough to work as CAT-5/6/7/8 Ethernet cable; 40GBASE-T at 2000 MHz?
There appear to be electron vortices and superconductivity in carbon at room temperature.
Can we replace copper with carbon in cables and computers?
> - Datacenter Outputs: Boiled Water, Warm Air, Data, recyclable building materials, Electricity
- "Ask HN: Why don't datacenters have passive rooflines like Net Zero homes?" https://news.ycombinator.com/item?id=37579505
Ask HN: Does OTEC work with datacenter heat, or thermoelectrics?
OTEC > Thermodynamic efficiency: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio... :
> A heat engine gives greater efficiency when run with a large temperature difference. In the oceans the temperature difference between surface and deep water is greatest in the tropics, although still a modest 20 to 25 °C. It is therefore in the tropics that OTEC offers the greatest possibilities.[4] OTEC has the potential to offer global amounts of energy that are 10 to 100 times greater than other ocean energy options such as wave power. [44][45]
> OTEC plants can operate continuously providing a base load supply for an electrical power generation system. [4]
> The main technical challenge of OTEC is to generate significant amounts of power efficiently from small temperature differences. It is still considered an emerging technology. Early OTEC systems were 1 to 3 percent thermally efficient, well below the theoretical maximum 6 and 7 percent for this temperature difference.[46] Modern designs allow performance approaching the theoretical maximum Carnot efficiency.
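The "theoretical maximum 6 and 7 percent" quoted above is just the Carnot limit for an OTEC-scale gradient; a minimal Python sanity check (the reservoir temperatures are illustrative assumptions, roughly tropical surface vs. deep water):

```python
# Carnot efficiency limit for a small temperature difference:
# eta = 1 - T_cold / T_hot, with temperatures in kelvin.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Return the Carnot efficiency for hot/cold reservoirs given in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Tropical surface water ~25 C vs deep water ~5 C (a 20 C gradient):
eta = carnot_efficiency(25.0, 5.0)
print(f"{eta:.1%}")  # 6.7%, consistent with the 6-7% theoretical maximum quoted above
```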
Thermoelectric power: https://en.wikipedia.org/wiki/Thermoelectric_power
Google's new quantum computer may help us understand how magnets work
- "AI Designs Magnet Free of Rare-Earth Metals in Just 3 Months" (2024) https://news.ycombinator.com/item?id=40775959
Tunable entangled photon-pair generation in a liquid crystal
From https://news.ycombinator.com/item?id=40589972 :
> "Observation of Bose–Einstein condensation of dipolar molecules" (2024) https://www.nature.com/articles/s41586-024-07492-z :
> Abstract: [...] This work opens the door to the exploration of dipolar quantum matter in regimes that have been inaccessible so far, promising the creation of exotic dipolar droplets, self-organized crystal phases and dipolar spin liquids in optical lattices.
> Given that each molecule is in an identical, known state, they could be separated to form quantum bits, or qubits, the units of information in a quantum computer. The molecules’ quantum rotational states — which can be used to store information — can remain robust for perhaps minutes at a time, allowing for long and complex calculations
Microsoft breached antitrust rules by bundling Teams and Office, EU says
Did they prevent OEMs from tampering with the system install image by adding apps that compete with Teams or from removing Teams?
Are they fascistly on notice for market cap, not antitrust offense?
Is the remedy that EU dominates MS until their market share is acceptable, or did MS prevent others from installing competing apps on the OS they and others distribute?
In the US, no bundling applies to ski resorts working in concert with other ski resorts. Materially, did MS prevent OEMs or users from installing competing apps?
HyperCard Simulator
As someone who’s too young to have been around for HyperCard, what was the main draw? Was it the accessibility of the tech or was it just really well executed?
https://en.wikipedia.org/wiki/HyperCard :
> It is among the first successful hypermedia systems predating the World Wide Web.
But HyperCard was not ported to OS X (now macOS), which has Slides and TextEdit for HTML without MacPorts or Brew.
HTML is Hypertext because it has edges and scripting IIUC.
Instead of the DOM and JS addEventListener, with React/preact you call the setState function returned by useState() to update the application state, so that components registered on that state are re-rendered when it changes: https://react.dev/learn/adding-interactivity#responding-to-e...
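The update-state-then-re-render pattern can be sketched as a toy observer store in Python; this is illustrative only (a generic pub/sub store), not React's actual API:

```python
# Toy state store: callbacks registered for a key re-run when that key
# changes, loosely mirroring the setState/useState re-render idea.

class StateStore:
    def __init__(self):
        self._state = {}
        self._subscribers = {}  # key -> list of callbacks

    def use_state(self, key, callback):
        """Register a callback to be invoked whenever `key` changes."""
        self._subscribers.setdefault(key, []).append(callback)

    def set_state(self, key, value):
        """Update `key` and notify its subscribers (like a re-render)."""
        self._state[key] = value
        for cb in self._subscribers.get(key, []):
            cb(value)

store = StateStore()
rendered = []
store.use_state("count", lambda v: rendered.append(f"count is {v}"))
store.set_state("count", 1)
store.set_state("count", 2)
print(rendered)  # ['count is 1', 'count is 2']
```

Real React components additionally diff a virtual DOM; this sketch only shows the subscription mechanics.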
The HyperTalk language has a fairly simple grammar compared to creating a static SPA (Single Page Application) with all of JS and e.g React and a router to support the browser back button and deeplink bookmarks with URI fragments that "don't break the web": https://github.com/antlr/grammars-v4/blob/master/hypertalk/H...
TodoMVC or HyperTalk? False dilemma.
More Memory Safety for Let's Encrypt: Deploying ntpd-rs
Unlike say, coreutils, ntp is something very far from being a solved problem and the memory safety of the solution is unfortunately going to play second fiddle to its efficacy.
For example, we only use chrony because it’s so much better than whatever came with your system (especially on virtual machines). ntpd-rs would have to come at least within spitting distance of chrony’s time keeping abilities to even be up for consideration.
(And I say this as a massive rust aficionado using it for both work and pleasure.)
The biggest danger in NTP isn't memory safety (though good on this project for tackling it), it's
(a) the inherent risks in implementing a protocol based on trivially spoofable UDP that can be used to do amplification and reflection
and
(b) emergent resonant behavior from your implementation that will inadvertently DDOS critical infrastructure when all 100m installed copies of your daemon decide to send a packet to NIST in the same microsecond.
I'm happy to see more ntpd implementations but always a little worried.
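One common mitigation for the resonance risk in (b) is randomizing each client's poll interval so a fleet never converges on the same instant; a minimal Python sketch (the interval and jitter values are illustrative, not from any real ntpd):

```python
import random

# Jittered poll scheduling: instead of every client polling at exactly
# poll_interval seconds (and drifting into synchronized bursts), each
# client picks a uniformly random offset around the nominal interval.

def next_poll_delay(poll_interval, jitter_fraction=0.1, rng=None):
    """Return the delay until the next poll, randomized +/- jitter_fraction."""
    rng = rng or random
    jitter = poll_interval * jitter_fraction
    return poll_interval + rng.uniform(-jitter, jitter)

rng = random.Random(42)
delays = [next_poll_delay(64.0, rng=rng) for _ in range(5)]
# All delays fall in [57.6, 70.4] seconds but differ from one another,
# so a large fleet of clients spreads its packets out over time.
```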
I really wish more internet infrastructure would switch to using NTS. It addresses these kinds of issues.
Never heard of it. Shockingly little on wikipedia for example.
Yeah. Seems it doesn’t even have its own article there.
Only a short mention in the main article about NTP itself:
> Network Time Security (NTS) is a secure version of NTPv4 with TLS and AEAD. The main improvement over previous attempts is that a separate "key establishment" server handles the heavy asymmetric cryptography, which needs to be done only once. If the server goes down, previous users would still be able to fetch time without fear of MITM. NTS is currently supported by several time servers, including Cloudflare. It is supported by NTPSec and chrony.
"RFC 8915: Network Time Security for the Network Time Protocol" (2020) https://www.rfc-editor.org/rfc/rfc8915.html
"NTS RFC Published: New Standard to Ensure Secure Time on the Internet" (2020) https://www.internetsociety.org/blog/2020/10/nts-rfc-publish... :
> NTS is basically two loosely coupled sub-protocols that together add security to NTP. NTS Key Exchange (NTS-KE) is based on TLS 1.3 and performs the initial authentication of the server and exchanges security tokens with the client. The NTP client then uses these tokens in NTP extension fields for authentication and integrity checking of the NTP protocol messages that exchange time information.
From "Simple Precision Time Protocol at Meta" https://news.ycombinator.com/item?id=39306209 :
> How does SPTP compare to CERN's WhiteRabbit, which is built on PTP [and NTP NTS]?
White Rabbit Project: https://en.wikipedia.org/wiki/White_Rabbit_Project
New device uses 2D material to up-convert infrared light to visible light
Use case: NIRS Near-Infrared Spectroscopy.
https://news.ycombinator.com/item?id=40332064 :
> predict e.g. [portable] low-field MRI with NIRS Infrared and/or Ultrasound?
> Idea: Do sensor fusion with all available sensors timecoded with landmarks, and then predict the expensive MRI/CT from low cost sensors [such as this coherent upconverter]
> Are there implied molecular structures that can be inferred from low-cost {NIRS, Light field, [...]} sensor data?
> Task: Learn a function f() such that f(lowcost_sensor_data) -> expensive_sensor_data
> FWIU OpenWater has moved to NIRS+Ultrasound for ~ live in surgery MRI-level imaging and now treatment?
> FWIU certain Infrared light wavelengths cause neuronal growth; and Blue and Green inhibit neuronal growth.
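The "learn a function f()" task quoted above is, at its simplest, supervised regression on paired calibration samples; a stdlib-only least-squares sketch with synthetic data (the sensor names and the linear relationship are invented for illustration):

```python
# Toy version of f(lowcost_sensor_data) -> expensive_sensor_data:
# fit a 1-D linear map (closed-form least squares) from paired
# calibration samples, then use it to predict the expensive reading.

def fit_linear(xs, ys):
    """Return (w, b) minimizing sum((w*x + b - y)^2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = cov / var
    b = my - w * mx
    return w, b

# Synthetic calibration data: expensive = 3 * lowcost + 0.5
lowcost = [i / 10 for i in range(1, 11)]
expensive = [3.0 * x + 0.5 for x in lowcost]
w, b = fit_linear(lowcost, expensive)
print(round(w, 6), round(b, 6))  # 3.0 0.5
```

Real multi-sensor fusion would use a learned nonlinear model over many channels; the shape of the problem (paired cheap/expensive samples, fit, predict) is the same.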
AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water
Sounds like the first problem super-intelligence (derived of this earth) should resolve is how to keep this system in dynamic equilibrium. Or, it should unwind itself if the solution is the problem.
"The Implications of Ending Groundwater Overdraft for Global Food Security" https://news.ycombinator.com/item?id=40686383
Computers made out of superconductors and Graphene instead of Copper would waste much less heat and water. "Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" https://news.ycombinator.com/item?id=40719725
Passive thermal design of data centers could reduce waste, too. Would heat vent out of server racks better if the servers were mounted vertically instead of like pizza boxes trapping heat?
> - "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distillation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39999225
Most datacenters are not yet able to return their steamed and boiled water back to the water supply for treatment. Datacenters that process and steam sanitize water would rely upon the water utility company to then also remove PFAS.
Does OTEC work in datacenters; would that work eat some of the heat? https://news.ycombinator.com/item?id=38222695
AI Designs Magnet Free of Rare-Earth Metals in Just 3 Months
> According to the makers of MagNex, compared with conventional magnets, the material costs are 20 percent what they would otherwise be, and there's also a 70 percent reduction in material carbon emissions.
> "In the electric vehicle industry alone, the demand for rare-earth magnets is expected to be ten times the current level by 2030."
Electric fields boost graphene's potential, study shows
SSH as a Sudo Replacement
My main objection to this is just the added complexity. Instead of a single suid binary that reads a config file and calls exec(), now you have one binary that runs as root and listens on a UNIX socket, and another that talks to a UNIX socket; both of them have to do asymmetric crypto stuff.
It seems like the main argument against sudo/doas being presented is that you have a suid binary accessible to any user, and if there's a bug in it, an unauthorized user might be able to use it for privilege escalation. If that's really the main issue, then you can:
chgrp wheel /usr/bin/sudo
chmod o-rwx /usr/bin/sudo
Add any sudoers to the wheel group, and there you go: only users that can sudo are allowed to even read the bytes of the file off disk, let alone execute them. This essentially gives you the same access-related security as the sshd approach (the UNIX socket there is set up to be only accessible to users in wheel), with much, much less complexity.
And since the sshd approach doesn't allow you to restrict root access to only certain commands (like sudo does), even if there is a bug in sudo that allows a user to bypass the command restrictions, that still gives no more access than the sshd approach.
If you are worried about your system package manager messing up the permissions on /usr/bin/sudo, you can put something in cron to fix them up that runs every hour or whatever you're comfortable with. Or you can uninstall sudo entirely, and manually install it from source to some other location. Then you have to maintain and upgrade it, manually, of course, unfortunately.
If you could configure your Linux kernel without suid support, that would be a huge benefit for security, IMO. The suid feature is a huge security hole.
Whether fighting one particular suid binary is worth it is questionable indeed. But this is a good direction. Another modern approach to this problem is run0 from systemd.
"List of Unix binaries that can be used to bypass local security restrictions" (2023) https://news.ycombinator.com/item?id=36637980
"Fedora 40 Plans To Unify /usr/bin and /usr/sbin" (2024) https://news.ycombinator.com/item?id=38757975 ; a find expression to locate files with the setuid and setgid bits, setcap,
man run0: https://www.freedesktop.org/software/systemd/man/devel/run0....
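The find expression mentioned above can be approximated in Python as well; a sketch that walks a tree looking for setuid/setgid regular files (the shell equivalent would be roughly `find / -xdev -type f -perm /6000`):

```python
import os
import stat

# Walk a directory tree and report regular files with the setuid or
# setgid bit set -- the files a "no suid" policy would want to audit.

def find_suid_sgid(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable or vanished; skip
            if stat.S_ISREG(st.st_mode) and st.st_mode & (stat.S_ISUID | stat.S_ISGID):
                hits.append(path)
    return hits
```

Scanning all of `/` this way is slow compared to find(1); the sketch just shows which stat bits are being checked.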
Asynchronous Consensus Without Trusted Setup or Public-Key Cryptography
Their protocol only uses cryptographic hash functions, which means that it's post-quantum secure. One reason why this is significant is because existing post-quantum public key algorithms such as SPHINCS+ use much larger keys than classic public key algorithms such as RSA or ECDH.
*Edit: As others have pointed out, for SPHINCS+ it's the signature size and not the key size that's significantly larger.
Only PQ hash functions are PQ FWIU. Other cryptographic hash functions are not NIST PQ standardization finalists; I don't think any were even submitted with "just double the key/hash size for the foreseeable future" as a parameter as is suggested here. https://news.ycombinator.com/item?id=25009925
NIST Post-Quantum Cryptography Standardization > Round 3 > Selected Algorithms 2022 > Hash based > SPHINCS+: https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...
SPHINCS+ is a hash based PQ Post Quantum (quantum resistant) cryptographic signature algorithm.
SPHINCS+: https://github.com/sphincs/sphincsplus
For those here who don’t know
SPHINCS is essentially Lamport Signatures and SPHINCS+ is essentially removing the need to store “state” by using chains of hashes and a random oracle
https://crypto.stackexchange.com/questions/54304/difference-...
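For intuition about "essentially Lamport signatures": a toy Lamport one-time signature using only stdlib hashing (a toy 16-bit parameter for brevity; real schemes use 256-bit digests, and this is not SPHINCS+ itself):

```python
import hashlib
import secrets

# Toy Lamport one-time signature over an n-bit message digest.
# Private key: 2*n random values; public key: their hashes.
# Signing reveals one preimage per message bit. STRICTLY one-time:
# reusing a key leaks preimages for both bit values.

N_BITS = 16  # toy size; real schemes use 256

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(N_BITS)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    return [(digest >> i) & 1 for i in range(N_BITS)]

def sign(sk, msg: bytes):
    # Reveal the secret corresponding to each bit of the digest.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # Hash each revealed secret and compare against the public key.
    return all(hashlib.sha256(sig[i]).digest() == pk[i][bit]
               for i, bit in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))  # True
```

SPHINCS+ layers hash trees and few-time signatures on top of this idea to remove the one-time restriction and keep keys small while signatures stay large.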
I prefer them to lattice-based methods because it seems to me that cryptographic hashes (and other trapdoor functions) are more likely to be quantum-resistant than lattices (for which an algorithm like Shor’s algorithm merely hasn’t been found yet).
In case you haven't seen, there was actually a quantum algorithm on preprint recently for solving some class of learning with errors problems in polynomial time. https://eprint.iacr.org/2024/555
Framework: A new RISC-V Mainboard from DeepComputing
What is the battery life like; maybe in ops/kwhr? https://news.ycombinator.com/item?id=40719630
The JH7110 CPU and 8 GB RAM on e.g. the VisionFive 2 uses about 5W of power at full load.
Battery life is going to be determined by your screen brightness, not the CPU.
From the reddit link in this flagged (?) comment:
> About three years ago, I came across information that the RISC-V U74 core had roughly 1.8 times lower performance per clock compared to the ARM Cortex-A53.
Are there other advantages?
What is the immediate marginal cost of an ARM license per core(s)?
Likely to do with such information being unsourced, and contradicting every documented benchmark.
U74 does very easily outdo A53 in performance, power and area.
Versioning FreeCAD Files with Git (2021)
As text/code-based formats, e.g. cadquery and the newer build123d work with text-based VCS systems like git.
Is there a diff tool for FreeCAD?
nbdime: https://nbdime.readthedocs.io/en/latest/#git-integration-qui... :
> Git doesn’t handle diffing and merging notebooks very well by default, but you can configure git to use nbdime:
nbdime config-git --enable --global
Oh, zippey, for zip archives of text files in git: https://bitbucket.org/sippey/zippey/src/master/
Are there tools to test/validate a FreeCAD model to check for regressions over a range of commits?
git bisect might then be useful with FreeCAD models; https://www.git-scm.com/docs/git-bisect :
> This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a "bad" commit that is known to contain the bug, and a "good" commit that is known to be before the bug was introduced. Then git bisect picks a commit between those two endpoints and asks you whether the selected commit is "good" or "bad". It continues narrowing down the range until it finds the exact commit that introduced the change.
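The binary search quoted above can be sketched in Python: given an ordered history where all "good" commits precede all "bad" ones, it finds the first bad commit in O(log n) probes (the commit names and the is_bad predicate here are hypothetical):

```python
# Sketch of the binary search behind `git bisect`: commits are ordered
# oldest -> newest, all good commits precede all bad ones, and we find
# the first bad commit with O(log n) is_bad() checks. Assumes at least
# one bad commit exists (git bisect requires a known-bad endpoint).

def bisect_first_bad(commits, is_bad):
    lo, hi = 0, len(commits) - 1  # invariant: first bad commit is in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # mid could itself be the first bad commit
        else:
            lo = mid + 1      # the first bad commit is after mid
    return commits[lo]

history = [f"commit_{i}" for i in range(16)]
# Hypothetical regression introduced at commit_11:
print(bisect_first_bad(history, lambda c: int(c.split("_")[1]) >= 11))
# -> commit_11, found in 4 probes instead of checking all 16 commits
```

With a FreeCAD model, is_bad() would be the validation step (e.g. rebuild the model and check for errors) run per commit via `git bisect run`.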
FWIW, LEGO Bricktales does automated design validation as part of the game. Similarly, what CI job(s) run when commits are pushed to a Pull Request git branch?
yes, i built this to solve that problem: https://github.com/khimaros/freecad-build
Which visual differences are occluded depending on the design and CAD software parameters?
If [FreeCAD] files are text-diffable, is there a way to topologically sort (with secondary insertion order) before git committing with e.g. precommit?
Python's json module has a sort_keys parameter for serialization, in order to alphabetically sort sibling nodes in nested data structures; which makes it easier to diff JSON documents that have different key orderings but otherwise equal content.
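A minimal illustration of that sort_keys behavior with the stdlib json module (the dict contents are made up):

```python
import json

# Two dicts with equal content but different insertion order serialize
# identically with sort_keys=True, so text diffs show only real changes.

a = {"name": "Part", "radius": 2.0, "height": 5.0}
b = {"height": 5.0, "name": "Part", "radius": 2.0}

dump_a = json.dumps(a, sort_keys=True, indent=2)
dump_b = json.dumps(b, sort_keys=True, indent=2)
print(dump_a == dump_b)  # True: stable, diff-friendly serialization
```

A pre-commit hook could apply the same canonicalization to any JSON-like CAD export before committing.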
martinvonz/jj: https://github.com/martinvonz/jj :
> Working-copy-as-a-commit: Changes to files are recorded automatically as normal commits, and amended on every subsequent change. This "snapshot" design simplifies the user-facing data model (commits are the only visible object), simplifies internal algorithms, and completely subsumes features like Git's stashes or the index/staging-area.
You'll need to put down ~35%, or almost $128,000, to afford a typical home
States That Have Passed Universal Free School Meals (So Far)
"Healthy, Hunger-Free Kids Act of 2010": https://en.wikipedia.org/wiki/Healthy,_Hunger-Free_Kids_Act_...
School meal programs in the United States: https://en.wikipedia.org/wiki/School_meal_programs_in_the_Un...
Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?
For transferring data it would be in theory possible. I haven't heard anything about semiconducting properties in graphene or other forms of carbon.
Minimally, a graphene based classical (or quantum) computer would require transistors and traces on a carbon wafer, from which NAND gates and better could be constructed.
FWIU there is certainly semiconductivity in graphene and there is also superconductivity in graphene at room temperature.
And traces can be lased into clothing, fruit peels, and probably carboniferous wafers.
Which electronic components cannot be made from graphene or other forms of carbon?
[deleted]
> semiconductivity in graphene
"Ultrahigh-mobility semiconducting epitaxial graphene on silicon carbide" (2024-01) https://www.nature.com/articles/s41586-023-06811-0 https://research.gatech.edu/feature/researchers-create-first...
> superconductivity in graphene at room temperature
From "Why There's a Copper Shortage" [...] https://news.ycombinator.com/item?id=40542826 re: the IEF report "Copper Mining and Vehicle Electrification" (2024) https://www.ief.org/focus/ief-reports/copper-mining-and-vehi... :
>> Graphene is an alternative to copper
>> In graphene, there is electronic topology: current, semiconductivity, room temperature superconductivity
>> - "Global Room-Temperature Superconductivity in Graphite" (2023) https://onlinelibrary.wiley.com/doi/10.1002/qute.202300230
>> - "Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691
- [Superconducting nanowire] "Photon Detectors Rewrite the Rules of Quantum Computing" https://news.ycombinator.com/item?id=39537329 ; functionalized carbon nanotubes work as single photon sources
- From "High-sensitivity terahertz detection by 2D plasmons in transistors" https://news.ycombinator.com/item?id=38800406 .. "Electrons turn piece of wire into laser-like light source" (2023) https://news.ycombinator.com/item?id=33490730 :
> "Coherent Surface Plasmon Polariton Amplification via Free Electron Pumping" (2023) https://doi.org/10.21203/rs.3.rs-1572967/v1 ; Surface plasmons and photolithography and imaging electrons within dielectrics, emitter but probably not yet with graphene or other carbon?
I think it's not a problem of if, but of how. Graphene is delicate and can easily break. The manufacturing process is so hard that it's still cheaper to use silicon to make chips.
Nanoassembled multilayer graphene plus structure is probably already doable?
- "Towards Polymer-Free, Femto-Second Laser-Welded Glass/Glass Solar Modules" (2024) https://ieeexplore.ieee.org/document/10443029 .. "Femtosecond Lasers Solve Solar Panels' Recycling Issue" https://news.ycombinator.com/item?id=40325433
- "A self-healing multispectral transparent adhesive peptide glass" (2024) https://www.nature.com/articles/s41586-024-07408-x .. "Self-healing glass from a simple peptide – just add water" https://news.ycombinator.com/item?id=40677095
- "High-strength, lightweight nano-architected silica" (2023) https://www.sciencedirect.com/science/article/pii/S266638642... .. "A New Wonder Material Is 5x Lighter—and 4x Stronger—Than Steel" https://www.popularmechanics.com/science/a44725449/new-mater... ..
From https://news.ycombinator.com/item?id=38253573 :
> maybe glass on quantum dots in synthetic DNA, and then still wave function storage and transmission; scale the quantum interconnect
From https://news.ycombinator.com/item?id=38608859 re: 2DPA-1 polyamide :
>> However, in the new study, Strano and his colleagues came up with a new polymerization process that allows them to generate a two-dimensional sheet called a polyaramide. For the monomer building blocks, they use a compound called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can grow in two dimensions, forming disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.
If not glass or polyamide 2DPA-1,
Maybe aerogels? Can high-carbon aerogels be lased into multilayer graphene circuits?
In addition to nanolithography and nanoassembly, there is 3d printing with graphene.
From https://all3dp.com/2/graphene-3d-printing-can-it-be-done/ :
> However, because the atomic bonds are only in the lateral direction, it can only exist in 2D. Once multiple layers of graphene are stacked one onto each other, you get graphite, which has relatively poor properties. This means 3D printing pure graphene is impossible. A 3D structure is only possible when graphene is mixed with a binder.
There are so many non-graphene forms of carbon.
Is peptide glass a suitable binder for multilayered graphene for semiconductor and superconductor computing?
Do or can aerogels already contain binder?
Reconstructing Public Keys from Signatures
Fun fact: an Ethereum transaction does not include the sender's address or pubkey.
It is calculated from the signature.
I'm not sure if Bitcoin can use this trick; at least the classic transaction types explicitly included the pubkey.
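A minimal pure-stdlib sketch of the underlying trick (secp256k1 domain parameters from the SEC 2 standard; the function names are illustrative, not any production library's API): given a signature (r, s), the message hash z, and a one-bit recovery id, the signer's public key is Q = r⁻¹(sR − zG).

```python
import hashlib
import secrets

# secp256k1 domain parameters (SEC 2)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Point addition on y^2 = x^3 + 7 over GF(P); None is the identity."""
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def sign(priv, msg):
    """ECDSA sign; returns (r, s) plus a one-bit recovery id (parity of R.y)."""
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    k = secrets.randbelow(N - 1) + 1
    R = ec_mul(k, G)
    r = R[0] % N
    s = (z + r * priv) * pow(k, -1, N) % N
    return r, s, R[1] & 1

def recover(msg, r, s, recid):
    """Recompute the signer's public key: Q = r^-1 * (s*R - z*G)."""
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    y = pow((r * r * r + 7) % P, (P + 1) // 4, P)  # sqrt mod P (P = 3 mod 4)
    if y & 1 != recid:
        y = P - y
    R = (r, y)
    zG = ec_mul(z, G)
    point = ec_add(ec_mul(s, R), (zG[0], (-zG[1]) % P))  # s*R - z*G
    return ec_mul(pow(r, -1, N), point)
```

Ethereum additionally hashes the recovered pubkey to derive the sender address, so neither needs to travel with the transaction.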
Electromechanical Lunar Lander
New algorithm discovers language just by watching videos
It's a stretch to call this discovering language. It's learning the correlations between sounds, spoken words and visual features. That's a long way from learning language.
I don’t think that you are wrong, but what do you mean by language that can’t be covered by p(y|x)?
I think he's wrong, by the same logic. If sound, visuals and meaning aren't correlated, what can language be? This looks like an elegant insight by Mr Hamilton and very much in line with my personal prediction that we're going to start encroaching on a lot of human-like AI abilities once it is usual practice to feed them video data.
There is a chance that abstract concepts can't be figured out visually ... although that seems outrageously unlikely since all mathematical concepts we know of are communicated visually and audibly. If it gets more abstract than math that'll be a big shock.
Linguistic relativity: https://en.m.wikipedia.org/wiki/Linguistic_relativity :
> The idea of linguistic relativity, known also as the Whorf hypothesis, [the Sapir–Whorf hypothesis], or Whorfianism, is a principle suggesting that the structure of a language influences its speakers' worldview or cognition, and thus individuals' languages determine or influence their perceptions of the world.
Does language fail to describe the quantum regime, for which we have little intuition? Verbally and/or visually, sufficiently describe the outcome of a double-slit photonic experiment onto a fluid.
Describe the operator product of (qubit) wave probability distributions and also fluid boundary waves with words? Verbally or visually?
I'll try: "There is diffraction in the light off of it and it's wavy, like <metaphor> but also like a <metaphor>"
> If it gets more abstract than math
There is a symbolic mathematical description of a [double slit experiment onto a fluid], but then sample each point in a CFD simulation and we're back to a frequentist sampling (and not yet a sufficiently predictive description of a continuum of complex reals)
Even without quantum or fluids to challenge language as a sufficient abstraction, mathematical syntax is already known to be insufficient to describe all Church-Turing programs even.
Church-Turing-Deutsch extends Church-Turing to cover quantum logical computers: any qubit/qudit/qutrit/qnbit system is sufficient to simulate any other such system, but there is no claim of sufficiency for universal quantum simulation. When we restrict ourselves to the operators defined in modern-day quantum logic, such devices are sufficient to simulate (or emulate) any other such devices; but observe that real quantum physical systems do not operate as closed systems with intentional reversibility the way QC does.
For example, there is a continuum of random in the quantum foam that is not predictable with and thus is not describeable by any Church-Turing-Deutsch program.
Gödel's incompleteness theorems: https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th... :
> Gödel's incompleteness theorems are two theorems of mathematical logic that are concerned with the limits of provability in formal axiomatic theories. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
ASM (Assembly Language) is still not the lowest level representation of code before electrons that don't split 0.5/0.5 at a junction without diode(s) and error correction; translate ASM to mathematical syntax (LaTeX and ACM algorithmic publishing style) and see if there's added value
> Even without quantum or fluids to challenge language as a sufficient abstraction, mathematical syntax is already known to be insufficient to describe all Church-Turing programs even.
"When CAN'T Math Be Generalized? | The Limits of Analytic Continuation" by Morphocular https://www.youtube.com/watch?v=krtf-v19TJg
Analytic continuation > Applications: https://en.wikipedia.org/wiki/Analytic_continuation#Applicat... :
> In practice, this [complex analytic continuation of arbitrary ~wave functions] is often done by first establishing some functional equation on the small domain and then using this equation to extend the domain. Examples are the Riemann zeta function and the gamma function.
> The concept of a universal cover was first developed to define a natural domain for the analytic continuation of an analytic function. The idea of finding the maximal analytic continuation of a function in turn led to the development of the idea of Riemann surfaces.
> Analytic continuation is used in Riemannian manifolds, solutions of Einstein's [GR] equations. For example, the analytic continuation of Schwarzschild coordinates into Kruskal–Szekeres coordinates. [1]
But Schwarzschild's regular boundary does not appear to correlate to limited modern observations of such "Planck relics in the quantum foam"; which could have [stable flow through braided convergences in an attractor system and/or] superfluidic vortical dynamics in a superhydrodynamic theory. (Also note: Dirac sea (with no antimatter); Gödel's dust solutions; Fedi's unified SQS (superfluid quantum space): "Fluid quantum gravity and relativity" with Bernoulli, Navier-Stokes, and Gross-Pitaevskii to model vortical dynamics)
Ostrowski–Hadamard gap theorem: https://en.wikipedia.org/wiki/Ostrowski%E2%80%93Hadamard_gap...
> For example, there is a continuum of random in the quantum foam that is not predictable with and thus is not describeable [sic]
From https://news.ycombinator.com/item?id=37712506 :
>> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
> The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.
If there cannot be a sufficient set of axioms for all mathematics, can there be a Unified field theory?
Unified field theory: https://en.wikipedia.org/wiki/Unified_field_theory
> translate ASM to mathematical syntax
On the utility of a syntax and typesetting, and whether it gains fidelity at lower levels of description
latexify_py looks neat; compared to sympy's often-unfortunately-reordered latex output: https://github.com/google/latexify_py/blob/main/docs/paramet...
"systemd-tmpfiles –purge" will delete /home in systemd 256
Why?
man systemd-tmpfiles https://www.freedesktop.org/software/systemd/man/latest/syst...
man tmpfiles.d https://www.freedesktop.org/software/systemd/man/latest/tmpf...
/home is specified as a path to create if it doesn't already exist.[1] Despite the name "tmpfiles", there is nothing necessarily temporary about /home being created.[2] Think of a system such as a mobile phone that has a read-only system partition always present; on first boot, /home is automatically created. /home is effectively considered "temporary" until the next "factory reset". The --purge command is deliberately meant to revert this temporarily created state by destroying /home, effectively returning the system to just a read-only system partition.[3]
[1] https://github.com/systemd/systemd/blob/main/tmpfiles.d/home...
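For reference, the tmpfiles.d format is whitespace-separated columns (Type, Path, Mode, User, Group, Age, Argument); a hedged sketch of how /home is declared (the exact lines in systemd's home.conf may differ):

```
# Type  Path   Mode  User  Group  Age  Argument
# 'Q' creates the path as a btrfs subvolume (with quota) where possible,
# falling back to a plain directory; '-' fields mean defaults, and an
# empty Age means entries are never cleaned up by age.
Q /home 0755 - - -
```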
Why doesn't it delete /mnt, and /run/media, too?
Mountpoints are explicitly hardcoded to never be unmounted or removed via the tmpfiles.d specification.[1] Possibly because such mountpoints are generally expected to be systemd .mount units, and as such unmounting and removal of paths would be tied in with systemd unit handling rather than systemd tmpfiles.d handling? I suppose one could still create a tmpfiles.d configuration file that handles files within a fixed mountpoint of /mnt/alwaysmounted/tmp/, as I couldn't see anything in tmpfiles.c[1] at a quick glance that would prevent this from occurring.
[1] https://github.com/systemd/systemd/blob/main/src/tmpfiles/tm...
[2] https://www.freedesktop.org/software/systemd/man/latest/syst...
Quantum entangled photons react to Earth's spin
> "The core of the matter lays in establishing a reference point for our measurement, where light remains unaffected by Earth's rotational effect. Given our inability to halt Earth's from spinning, we devised a workaround: splitting the optical fiber into two equal-length coils and connecting them via an optical switch," explains lead author Raffaele Silvestri.
> By toggling the switch on and off, the researchers could effectively cancel the rotation signal at will, which also allowed them to extend the stability of their large apparatus. "We have basically tricked the light into thinking it's in a non-rotating universe," says Silvestri.
"Experimental observation of Earth's rotation with quantum entanglement" (2024) https://www.science.org/doi/10.1126/sciadv.ado0215
Sagnac interferometer: https://en.wikipedia.org/wiki/Sagnac_effect
So almost like an optical Wheatstone bridge?
Wheatstone bridge > Modifications of the basic bridge: https://en.wikipedia.org/wiki/Wheatstone_bridge#Modification... :
> Carey Foster bridge, for measuring small resistances; Kelvin bridge, for measuring small four-terminal resistances; Maxwell bridge, and Wien bridge for measuring reactive components; Anderson's bridge, for measuring the self-inductance of the circuit, an advanced form of Maxwell's bridge
Is there a rectifier for gravitational waves, and what would that do?
Diode bridge > Current flow: https://en.wikipedia.org/wiki/Diode_bridge#Current_flow
And actually again, electron flow is fluidic:
- "How does electricity find the "Path of Least Resistance"?" (and the other paths) https://www.youtube.com/watch?v=C3gnNpYK3lo
- "Observation of current whirlpools in graphene at room temperature" (2024) https://news.ycombinator.com/item?id=40360684 :
> How are spin currents and vorticity in electron vortices related?
But, back to photons from electrons; like in a wheatstone bridge.
Are photonic fields best described with rays, waves, or as fluids in gravitational vortices like in SPH and SQS and CFD? (Superhydrodynamic, Superfluid Quantum Space, Computational Fluid Dynamics)
Actually, photons do interact with photons; as phonons in matter: "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 https://news.ycombinator.com/item?id=40600762
Perhaps there's an even simpler sensor apparatus for this experiment?
Recycling plastic is a dangerous waste of time
From https://news.ycombinator.com/item?id=40335733 :
> "Making hydrogen from waste plastic could pay for itself" (2024) https://news.ycombinator.com/item?id=37886955 ; flash heating plastic yields Graphene and Hydrogen
> "Synthesis of Clean Hydrogen Gas from Waste Plastic at Zero Net Cost" (2023) https://onlinelibrary.wiley.com/doi/10.1002/adma.202306763
"Fungus breaks down ocean plastic" (2024) https://news.ycombinator.com/item?id=40676239
What does "Zero Net Cost" mean and how does that relate to margin?
Voyager 1 is back online: NASA spacecraft returns data from all 4 instruments
I always joke that NASA should win the Nobel Prize in engineering for their work on the Mars rovers, where the punchline is that there is no Nobel Prize for engineering... I didn't say it was a good joke.
But the Voyager missions... wow. NASA should totally win the Nobel Prize in engineering for them. What an accomplishment.
WARC-GPT: An open-source tool for exploring web archives using AI
Looking into this, could be very useful for working with personal Zotero archives of webpages
As previously seen on HN:
paper-qa: https://github.com/whitead/paper-qa :
> LLM Chain querying documents with citations [e.g. a scientific Zotero library]
> This is a minimal package for doing question and answering from PDFs or text files (which can be raw HTML). It strives to give very good answers, with no hallucinations, by grounding responses with in-text citations.
pip install paper-qa
> If you use Zotero to organize your personal bibliography, you can use the paperqa.contrib.ZoteroDB to query papers from your library, which relies on pyzotero. Install pyzotero to use this feature: pip install pyzotero
> If you want to use [ paperqa with pyzotero ] in a Jupyter notebook or Colab, you need to run the following commands: import nest_asyncio
nest_asyncio.apply()
... there is also neuml/paper-ai w/ paperetl: https://news.ycombinator.com/item?id=39363115 ; gh topics: pdfgpt, chatpdf. neuml/paperai has a YAML report definition schema: https://github.com/neuml/paperai :
> Semantic search and workflows for medical/scientific papers
python -m paperai.report report.yml 50 md <path to model directory>
> The following [columns and answers] report [output] formats are supported: [Markdown, CSV], Annotation - Columns and answers are extracted from articles with the results annotated over the original PDF files. Requires passing in a path with the original PDF files.
The Implications of Ending Groundwater Overdraft for Global Food Security
"The Implications of Ending Groundwater Overdraft for Global Food Security" Nature Sustainability (2024) https://www.nature.com/articles/s41893-024-01376-w
"Study emphasizes trade-offs between arresting groundwater depletion and food security" (2024) by International Food Policy Research Institute https://phys.org/news/2024-06-emphasizes-offs-groundwater-de... :
> [...] a transdisciplinary approach combining regulatory, financial, technological, and awareness measures across water and food systems is essential to achieve sustainable groundwater management while preventing increased food insecurity.
Overdrafting: https://en.wikipedia.org/wiki/Overdrafting :
> Overdrafting is the process of extracting groundwater beyond the equilibrium yield of an aquifer. Groundwater is one of the largest sources of fresh water and is found underground. The primary cause of groundwater depletion is the excessive pumping of groundwater up from underground aquifers. Insufficient recharge can lead to depletion, reducing the usefulness of the aquifer for humans. Depletion can also have impacts on the environment around the aquifer, such as soil compression and land subsidence, local climatic change, soil chemistry changes, and other deterioration of the local environment.
From "How a Solar Revolution in Farming Is Depleting Groundwater" (2024) https://news.ycombinator.com/item?id=39731290 :
> - "Desalination system could produce freshwater that is cheaper than tap water" (2023) : "Extreme salt-resisting multistage solar distilation with thermohaline convection" (2023) https://news.ycombinator.com/item?id=39507692
ChromeOS will soon be developed on large portions of the Android stack
Android Binder is already rewritten in Rust.
"Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=40680722
ChromiumOS, ChromeOS Flex,
https://news.ycombinator.com/item?id=36998964 :
> Students can't run `git --help`, `python`, or even `adb` on Chromebooks.
> (Students can't run `git` on a Chromebook without transpiling to WASM (on a different machine) and running all apps as the same user account without SELinux or `ls -Z` on the host).
> Students can run `bash`, `git`, and `python` on Linux / Mac / or Windows computers.
> There should be a "Computers for STEM" spec.
Docker containers can be run on Android with root with termux:
- https://gist.github.com/FreddieOliveira/efe850df7ff3951cb62d...
Podman and podman-desktop run rootless containers (without having root).
containers/container-selinux: https://github.com/containers/container-selinux/tree/main .spec: https://src.fedoraproject.org/rpms/container-selinux/blob/ra...
"The best WebAssembly runtime may be no runtime" https://news.ycombinator.com/item?id=38609105 : google/minijail, SyscallFilter=,
E.g. https://vscode.dev/ runs in a browser tab on a Chromebook and supports Pyodide and IIUC hopefully more native Python in WASM support soon?
Hopefully Bash, Git, and e.g. Python will work offline on the kids' Chromebooks someday.
Nvidia Warp: A Python framework for high performance GPU simulation and graphics
I love how many python to native/gpu code projects there are now. It's nice to see a lot of competition in the space. An alternative to this one could be Taichi Lang [0] it can use your gpu through Vulkan so you don't have to own Nvidia hardware. Numba [1] is another alternative that's very popular. I'm still waiting on a Python project that compiles to pure C (unlike Cython [2] which is hard to port) so you can write homebrew games or other embedded applications.
CuPy is also great – makes it trivial to port existing numerical code from NumPy/SciPy to CUDA, or to write code that can run either on CPU or on GPU.
I recently saw a 2-3 orders of magnitude speed-up of some physics code when I got a mid-range nVidia card and replaced a few NumPy and SciPy calls with CuPy.
Don’t forget JAX! It’s my preferred library for “i want to write numpy but want it to run on gpu/tpu with auto diff etc”
From https://news.ycombinator.com/item?id=37686351 :
>> sympy.utilities.lambdify.lambdify() https://github.com/sympy/sympy/blob/a76b02fcd3a8b7f79b3a88df... :
>> """Convert a SymPy expression into a function that allows for fast numeric evaluation""" [e.g. the CPython math module, mpmath, NumPy, SciPy, CuPy, JAX, TensorFlow, SymPy, numexpr,]
sympy#20516: "re-implementation of torch-lambdify" https://github.com/sympy/sympy/pull/20516
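The core idea of lambdify can be sketched with the stdlib alone: compile an expression string into a function whose names resolve against a chosen numeric backend module. (sympy's real implementation does printer-based translation per backend; this toy version, including the name `toy_lambdify`, is an illustrative assumption, not sympy's API.)

```python
import math

def toy_lambdify(args, expr, backend=math):
    """Compile `expr` into a function over `args`, resolving names
    (sin, cos, pi, ...) against the given numeric backend module."""
    namespace = {name: getattr(backend, name)
                 for name in dir(backend) if not name.startswith("_")}
    code = compile(f"lambda {', '.join(args)}: {expr}", "<toy_lambdify>", "eval")
    return eval(code, namespace)

# The returned function is plain Python, so repeated numeric evaluation
# avoids re-walking any symbolic expression tree:
f = toy_lambdify(["x"], "sin(x)**2 + cos(x)**2")
```

Swapping `backend=math` for numpy (if installed) would vectorize `f` over arrays, which is exactly the switch sympy's `modules=` argument makes.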
Fungus breaks down ocean plastic
Microplastic accumulation in the body makes me wonder if natural biopolymers could have the same problem. We cannot break down cellulose; what happens to micro-cellulose in the body? Or lignin, which is even more refractory to decomposition?
Do plant microfibers accumulate in the body over time, like plastic or asbestos fibers? Do we end up loaded with this stuff in old age?
One of the most lethal professions of old was baker, because of all the flour dust inhaled.
The breakdown of lignin by fungi shows the lengths organisms have to go to decompose refractory organic materials. A whole suite of enzymes and associated compounds are released into the extracellular environment by these fungi, including hydrogen peroxide, some of which is decomposed to highly oxidizing hydroxyl radicals. This also shows why it should not be surprising fungi are capable of attacking plastics, at least to some extent: they are already blasting biopolymers with a barrage of non-specific highly reactive oxidants.
I wonder if there’s a type of fungus humans could eat that would break down unwanted fibers and microplastics?
I don't think I'd want my internals exposed to the stuff fungi have to excrete to break down the fibers. Fenton chemistry is used to clean lab glassware, I think.
https://en.wikipedia.org/wiki/Fenton%27s_reagent
Review article on fungal degradation of lignin:
Are microplastics fat soluble and/or bound with the cholesterols and sugar alcohols that cake the arteries?
- "Cyclodextrin promotes atherosclerosis regression via macrophage reprogramming" https://www.science.org/doi/10.1126/scitranslmed.aad6100 https://www.popsci.com/compound-in-powdered-alcohol-can-also...
Cellulose and Lignin are dietary fiber:
- "Dietary cellulose induces anti-inflammatory immunity and transcriptional programs via maturation of the intestinal microbiota" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583510/ :
> Dietary cellulose is an insoluble fiber and consists exclusively of unbranched β-1,4-linked glucose monomers. It is the major component of plant cell walls and thus a prominent fiber in grains, vegetables and fruits. Whereas the importance of cellulolytic bacteria for ruminants was described already in the 1960s, it still remains enigmatic whether the fermentation of cellulose has physiological effects in monogastric mammals. [6–11] Under experimental conditions, it has been shown that the amount of dietary cellulose influences the richness of the colonic microbiota, the intestinal architecture, metabolic functions and susceptibility to colitis. [12,13] Moreover, mice fed a cellulose-enriched diet were protected from experimental autoimmune encephalomyelitis (EAE) through changes in their microbial and metabolic profiles and reduced numbers of pro-inflammatory T cells.
But what about fungi in the body and diet?
What about lignin; Is lignin dietary fiber?
From "Dietary fibre in foods: a review" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7583510/ :
> Dietary fibre includes polysaccharides, oligosaccharides, lignin and associated plant substances.
From https://news.ycombinator.com/item?id=40649844 re: sustainable food packaging solutions :
> Cellulose and algae are considered safe for human consumption and are also biodegradable; but is that an RCT study?
> CO2 + Lignin is not edible but is biodegradable and could replace plastics.
>> "CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling" (2024) https://onlinelibrary.wiley.com/doi/10.1002/adfm.202403035
> What incentives would incentivize the market to change over to sustainable biodegradable food-safe packaging?
Industry forms consortium to drive adoption of Rust in safety-critical systems
> The consortium aims to develop guidelines, tools, libraries, and language subsets to meet industrial and legal requirements for safety-critical systems.
> Moreover, the initiative seeks to incorporate lessons learned from years of development in the open source ecosystem to make Rust a valuable component of safety toolkits across various industries and severity levels
Resources and opportunities for a safety critical Rust initiative:
- "The First Rust-Written Network PHY Driver Set to Land in Linux 6.8" https://news.ycombinator.com/item?id=38677600
- awesome-safety-critical > Software safety standards: https://awesome-safety-critical.readthedocs.io/en/latest/#so...
- rust smart pointers: https://news.ycombinator.com/item?id=33563857 ; LLVM signed pointers for pointer authentication: https://news.ycombinator.com/item?id=40307180
From https://news.ycombinator.com/item?id=33563857 :
> - Secure Rust Guidelines > Memory management, > Checklist > Memory management: https://anssi-fr.github.io/rust-guide/05_memory.html
Rust OS projects to safety critical with the forthcoming new guidelines: Redox, Cosmic, MotorOS, Maestro, Aerugo
- "MotorOS: a Rust-first operating system for x64 VMs" https://news.ycombinator.com/item?id=38907876: "Maestro: A Linux-compatible kernel in Rust" (2023) https://news.ycombinator.com/item?id=38852360#38857185 ; redox-os, cosmic-de , Motūrus OS; MotorOS
- https://news.ycombinator.com/item?id=38861799 : > COSMIC DE (Rust-based) supports rust-windowing/winit apps, which compile to a <canvas> tag in WASM.
> winit: https://github.com/rust-windowing/winit
- "Aerugo – RTOS for aerospace uses written in Rust" https://news.ycombinator.com/item?id=39245897
- "The Rust Implementation of GNU Coreutils Is Becoming Remarkably Robust" https://news.ycombinator.com/item?id=34743393
From a previous Ctrl-F for "rust"; "Rust in the Linux kernel" (2021) https://news.ycombinator.com/item?id=35783214 :
- > Is this the source for the rust port of the Android binder kernel module?: https://android.googlesource.com/platform/frameworks/native/...
> This guide with unsafe rust that calls into the C, and then with next gen much safer rust right next to it would be a helpful resource too.
From https://news.ycombinator.com/item?id=34744433 ... From "Are software engineering “best practices” just developer preferences?" https://news.ycombinator.com/item?id=28709239 :
>>>>> Which universities teach formal methods?
/?hnlog "TLA" and "side channel"
Land value tax in online games and virtual worlds (2022)
> Land Crisis [on the internet, too]
Land Value Tax: https://en.wikipedia.org/wiki/Land_value_tax
Virtual economies: https://en.wikipedia.org/wiki/Virtual_economies
Category:Virtual economies: https://en.wikipedia.org/wiki/Category:Virtual_economies
Gold farming > Secondary effects on in-game economy; inflation,: https://en.wikipedia.org/wiki/Gold_farming#Secondary_effects...
WSE: World Stock Exchange (2007-2008) https://en.wikipedia.org/wiki/World_Stock_Exchange
The opportunity costs of spending GPU hours on various alternatives.
Economy of Second Life > Monetary policy: https://en.wikipedia.org/wiki/Economy_of_Second_Life
Circle, Centre > History: https://en.wikipedia.org/wiki/Circle_(company)
Stablecoin > Reserve-backed stablecoins: https://en.wikipedia.org/wiki/Stablecoin
Economic stability: https://en.wikipedia.org/wiki/Economic_stability
Stabilization policy > Crisis stabilization: https://en.wikipedia.org/wiki/Stabilization_policy#Crisis_st...
Macroeconomics > Macroeconomic policy; a "policy wonk" may also be a "quant" https://en.wikipedia.org/wiki/Macroeconomics#Macroeconomic_p...
Clearing house (finance) > https://en.wikipedia.org/wiki/Clearing_house_(finance) :
> Impact: A 2019 study in the Journal of Political Economy found that the establishment of the New York Stock Exchange (NYSE) clearinghouse in 1892 "substantially reduced volatility of NYSE returns caused by settlement risk and increased asset values", indicating "that a clearinghouse can improve market stability and value through a reduction in network contagion and counterparty risk."
From https://news.ycombinator.com/item?id=40649612 :
> Interledger > Peering, Clearing and Settling: https://interledger.org/developers/rfcs/peering-clearing-set...
But the problem is this land bubble, and then unfortunately unrealized losses; and how does that happen?
ADB tips and tricks: Commands that every power user should know about (2023)
ADB: Android Debug Bridge: https://en.wikipedia.org/wiki/Android_Debug_Bridge
- Scrcpy does remote GUI over ADB: https://en.wikipedia.org/wiki/Scrcpy
New theory links quantum geometry to electron-phonon coupling
Also referenced in a comment the other day on "Physicists Puzzle over Emergence of Electron Aggregates" https://news.ycombinator.com/item?id=40524238#40547258
21.2× faster than llama.cpp? plus 40% memory usage reduction
Project GitHub link: https://github.com/SJTU-IPADS/PowerInfer
The video/litepaper they linked is also a great read: https://powerinfer.ai/v2/
Truly mind-blowing to see Mixtral 47B running on a smartphone.
> TL;DR PowerInfer is a CPU/GPU LLM inference engine leveraging activation locality for your device.
So is it running on the phone or not?
From the abstract https://arxiv.org/abs/2406.06282 :
> Notably, PowerInfer-2 is the first system to serve the TurboSparse-Mixtral-47B model with a generation rate of 11.68 tokens per second on a smartphone. For models that fit entirely within the memory, PowerInfer-2 can achieve approximately a 40% reduction in memory usage while maintaining inference speeds comparable to llama.cpp and MLC-LLM
What does this do with a 40 TOPS+ NPU/TPU?
Python wheel filenames have no canonical form
There’s also no package caching across venvs(?). Every time I have to install something AI-related, it's the same 2 GB torch + a few hundred megabytes of libs + a CUDA update, downloaded again and again. On top of that, Python projects love to start downloading large resources without asking, and if you Ctrl-C the setup process it is usually non-restartable. You're just left with a broken folder that doesn't run anymore. Feels like everything about Python's "install" is an afterthought at best.
It would be nice if all packages (and each version) were cached somewhere global and then just referred to by a local environment, to deduplicate. I think Maven does this.
The downloaded packages themselves are shared in pip's cache. Once installed though they cannot be shared, for a number of reasons:
- Compiled python code is (can be) generated and stored alongside the installed source files, sharing that installed package between envs with different python versions would lead to weird bugs
- Installed packages are free to modify their own files (especially when they embed data files)
- Virtualenvs are essentially non-relocatable, and installed package manifests use absolute paths
Really that shouldn't be a problem though. Downloaded packages are cached by pip, as well as locally built wheels of source packages, meaning once you've installed a bunch of packages already, installing new ones in new environments is a mere file extraction.
The real issue IMHO causing install slowness is that package maintainers have not gotten into the habit of being mindful of the size of their packages. If you install multiple data science packages, for instance, you can quickly end up with multi-GB virtualenvs, for the sole reason that package maintainers didn't bother to remove (or separate as "extra") their test datasets, language translation files, static assets, etc.
They can be, pip developers just have to care about this. Nothing you described precludes file-based deduplication, which is what pnpm does for JS projects: it stores all library files in a global content-addressable directory, and creates {sym,hard,ref}links depending on what your filesystem supports.
Being able to mutate files requires reflinks, but they're supported by e.g. XFS, you don't have to go to COW filesystems that have their own disadvantages.
You can do something like that manually for virtualenvs too, by running rmlint on all of them in one go. It will pick an "original" for each duplicate file, and will replace duplicates with reflinks to it (or other types of links if needed). The downside is obvious: it has to be repeated as files change, but I've saved a lot of space this way.
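A minimal sketch of the content-addressed hard-linking that pnpm-style deduplication (and rmlint's link mode) relies on. The store path and helper name here are hypothetical, and a real store would also track metadata; this only shows the core mechanism of keying files by content hash and replacing duplicates with hard links:

```python
import hashlib
import os
from pathlib import Path

STORE = Path("/tmp/pkg-store")  # hypothetical global content-addressable store


def dedupe_into_store(file: Path) -> Path:
    """Key a file by the SHA-256 of its contents, keep one copy in the
    store, and replace the original path with a hard link to that copy."""
    digest = hashlib.sha256(file.read_bytes()).hexdigest()
    stored = STORE / digest[:2] / digest
    stored.parent.mkdir(parents=True, exist_ok=True)
    if not stored.exists():
        os.replace(file, stored)  # first occurrence seeds the store
    else:
        file.unlink()             # duplicate content: drop the copy...
    os.link(stored, file)         # ...and hard-link back to the stored file
    return stored
```

Hard links only work within one filesystem, which is part of why pnpm falls back to symlinks or reflinks depending on what the filesystem supports.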
Or just use a filesystem that supports deduplication natively like btrfs/zfs.
This is unreasonably dismissive: the `pip` authors care immensely about maintaining a Python package installer that successfully installs billions of distributions across disparate OSes and architectures each day. Adopting deduplication techniques that only work on some platforms, some of the time means a more complicated codebase and harder-to-reproduce user-surfaced bugs.
It can be worth it, but it's not a matter of "care": it's a matter of bandwidth and relative priorities.
"Having other priorities" uses different words to say exactly the same thing. I'm guessing you did not look at pnpm. It works on all major operating systems; deduplication works everywhere too, which shows that it can be solved if needed. As far as I know, it has been developed by one guy in Ukraine.
Send a PR!
Are there package name and version disclosure considerations when sharing packages between envs with hardlinks and does that matter for this application?
Practically, caching ~/.cache/pip should save resources; from "What to do about GPU packages on PyPI?" https://news.ycombinator.com/item?id=27228963 :
> "[Discussions on Python.org] [Packaging] Draft PEP: PyPI cost solutions: CI, mirrors, containers, and caching to scale" [...]
> How to persist ~/.cache/pip between builds with e.g. Docker in order to minimize unnecessary GPU package re-downloads:
RUN --mount=type=cache,target=/root/.cache/pip
RUN --mount=type=cache,target=/home/appuser/.cache/pip
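In context, the two RUN lines above are BuildKit cache mounts; a minimal Dockerfile sketch (the base image tag and requirements file are placeholders, and BuildKit must be enabled, e.g. `DOCKER_BUILDKIT=1`) might look like:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# BuildKit cache mount: pip's download cache persists across builds
# on the build host, but is not baked into the image layers.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```

The cache directory lives in BuildKit's cache store, so rebuilds skip re-downloading wheels without bloating the final image.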
Researchers make a supercapacitor from water, cement, and carbon black
I'm happy this invention is still in the news cycle after the initial announcement 10 months ago...
[0] https://news.ycombinator.com/item?id=36958531
[1] https://news.ycombinator.com/item?id=36993411
[2] https://news.ycombinator.com/item?id=36951089
While I'm fairly dubious about the proposed dual-purpose structural implementation of this material -- if this works at scale it would be a boon for low cost DIY local energy storage in the developing world and remote areas in other places.
The idea that someone with minimal education/training can construct a durable electrical storage solution using commonly available materials and techniques is an absolute game changer!!
- "Low-cost additive turns concrete slabs into super-fast energy storage" (2023) https://news.ycombinator.com/item?id=36964735
Google shuts down GPay app and P2P payments in the US
I am exhausted switching services. please can we pick one and be done with it?
How about adversarial integration? Why can't I send money from cashapp to venmo? Is the technology not there yet?
I wish the post office was a bank and they hosted their own pay apps.
FedNow is being implemented across the industry. It is the "SEPA" for the USA. It will replace the 3rd-party hacks (which paper over the slow and tedious nature of ACH) that currently exist via Venmo, Cashapp, Zelle [0], etc.
[0]: Zelle is run by a consortium of FIs. I don't know for certain what will happen to it, but I suspect that if it stays around in name it will switch to the FedNow rails behind the scenes.
/? fednow API: https://www.google.com/search?q=fednow+api [...]
- From https://explore.fednow.org/resources?id=67#_Who_should_downl... :
> Who should download the FedNow ISO 20022 message specifications and how can I access them?
> The Federal Reserve is using the MyStandards® platform to provide access to the FedNow ISO 20022 message specifications and accompanying implementation guide. You can access these on the Federal Reserve Financial Services portal (Off-site) under the FedNow Service. Users need a MyStandards account, which you can create on the SWIFT website (Off-site). View our step-by-step guide (PDF, Off-site) for tips on accessing the specifications.
FedNow charges a $0.045/tx fee. (edit: And a waived $25/mo fee [...] https://explore.fednow.org/explore-the-city?id=3&building=&p... )
From "X starts experimenting with a $1 per year fee for new users" https://news.ycombinator.com/item?id=37933366 ... From "Anatomy of an ACH transaction" (2023) https://news.ycombinator.com/item?id=36503888 :
> The WebMonetization spec [4] and docs [5] specifies the `monetization` <meta> tag for indicating where supporting browsers can send payments and micropayments:
<meta name="monetization" content="$wallet.example.com/alice">
And from "Paying Netflix $0.53/H, etc." https://news.ycombinator.com/item?id=38685348 :
> W3C Web Monetization does micropayments with a <$0.01/tx fee, but not KYC or AML.
> [Web Monetization] is an open spec: https://webmonetization.org/specification/
Built on W3C Interledger: https://interledger.org/interledger :
> Built on modern technologies, Interledger can handle up to 1 million transactions per second per participant
> ILP is not tied to a single company, payment network, or currency.
Interledger > Peering, Clearing and Settling: https://interledger.org/developers/rfcs/peering-clearing-set...
Can FedNow handle micropayments volumes? > Micropayments: https://en.wikipedia.org/wiki/Micropayment
Does FedNow integrate with Interledger; i.e. is there support for "ILP Addresses"?
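Some rough arithmetic on whether FedNow's published fee works at micropayment scale, assuming the $0.045/tx figure above and the <$0.01/tx Web Monetization figure cited earlier (taking $0.01 as the upper bound):

```python
def fee_overhead(payment_cents: float, fee_cents: float) -> float:
    """Fee expressed as a fraction (or multiple) of the payment amount."""
    return fee_cents / payment_cents


# FedNow's published $0.045/tx fee against a one-cent micropayment:
print(fee_overhead(1, 4.5))   # the fee is 4.5x the payment itself

# The <$0.01/tx Web Monetization fee against a $0.53 payment:
print(fee_overhead(53, 1))    # under 2% overhead
```

At a flat $0.045/tx, sub-cent payments are uneconomical unless transactions are batched or aggregated before settlement.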
Some of these sentences seem sort of related. You're just linking to Google searches and providing random snippets from linked sites. Overall this comment and many of the comments in your history look like spam.
Compared to AI-generated text, with citations found after the fact for whatever was generated, without consideration for truth or ethical reasoning?
The heuristics should be improved, then; we shouldn't be asked to put these in PDFs, which lack typed edges to URLs and schema.org/Datasets that could support a schema.org/Claim or a ClaimReview in a SocialMediaPosting about a NewsArticle about a ScholarlyArticle about a Thing skos:Concept with a :url.
Would you have deep linked your way to those resources (which are useful enough to be re-cited here for later reference) without the researcher doing it for you?
Technical writing courses, for example, advise including the searches you ran and the databases in which you found your sources, in order to indicate, at a minimum, the background research bias prior to the hypothesis.
Transparency and Accountability: https://westurner.github.io/hnlog/
So, this sort of cited input is valuable and should be treated not as link spam but as link-dense, edge-rich research minus the generatable prose.
> Technical writing courses for example advise to include the searches that you used in the databases that you found oracles in, in order to help indicate the background research bias prior to the hypothesis at least.
I'm not writing a doctoral thesis. These comments are not my livelihood. I'm just sharing industry knowledge on a topic I know about.
Your random links without context, some of the links to literal Google searches, are not in any way useful or interesting.
It's the style of comments you get on a lot of conspiracy sites. I see comment styles like that on a lot of articles linked from places like rense.com.
No, these are all hand prepared.
This is called research, and it does require citations.
Please do not harass overachievers who do the research work instead of just talking to talk.
#areyouarobotorsimilar
I certainly didn't think the comments weren't human-generated. I was actually defending you. I just think your style of commenting comes across as unusual here, which is why it attracts attention. On other parts of the Internet the style you use is more common and would not raise eyebrows.
FDA denies petition against use of phthalates in food packaging
I was surprised to learn that some of the highest concentrations of phthalates can be found in alcoholic beverages in glass bottles (Table 3: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7460375/ )
This occurs because the plastics used in producing the alcoholic beverages come into contact with ethanol (alcohol), which acts as a solvent and liberates the phthalates.
I was equally surprised to learn that the food industry was already moving toward using fewer phthalate plasticizers. More recent sampling efforts have found fewer, or sometimes no, phthalates in tubing, whereas decades ago they were ubiquitous.
Which plastics could and should be replaced with other materials?
"How Do You Know If Buckets Are Food Grade" https://epackagesupply.com/blog/how-do-you-know-if-buckets-a... :
> Buckets made of HDPE (number 2) are generally considered the best material for food storage, especially over the long term. A vast majority of plastic buckets that are sold for food storage purposes will be made of HDPE. It’s important to note that not all HDPE buckets are food grade; to be sure, you’ll want to look for the cup and fork logo (described below) or other indication of “food safe” or “food grade” materials.
> Cup and Fork: Elsewhere on the bottom of the bucket, some food-grade buckets will have a symbol consisting of a cup and fork on them. There may also be markings like “USDA approved” or “FDA approved.”
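As context for the cup-and-fork guidance quoted above, the resin identification codes (the recycling-triangle numbers) are standardized; a small Python lookup, where the food-grade set reflects common guidance only, not a regulatory determination for any specific container:

```python
# Resin identification codes: the number inside the recycling triangle.
RESIN_CODES = {
    1: "PET", 2: "HDPE", 3: "PVC",
    4: "LDPE", 5: "PP", 6: "PS", 7: "Other",
}

# Resins commonly used for food-grade containers (general guidance only;
# a specific bucket still needs the cup-and-fork / "food grade" marking).
COMMONLY_FOOD_GRADE = {1, 2, 4, 5}


def resin_name(code: int) -> str:
    """Name of the resin for a recycling-triangle code."""
    return RESIN_CODES.get(code, "Unknown")


print(resin_name(2), 2 in COMMONLY_FOOD_GRADE)  # HDPE True
```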
A bunch of comments here are roughly "there's not great evidence that these are harmful" -- but is that the right standard? Or should there be an obligation to provide evidence that something is "safe" in order to use it in some consumer applications (like food packaging)? It's tempting to say that we should conclusively establish that something isn't harmful before we use it absolutely everywhere ... but what standard of evidence would be both achievable and ethical?
Because people generally aren't getting acutely ill after eating food from a plastic package, we're left with the possibility that cumulative impacts over years might be harmful -- but it doesn't seem feasible to run long-term studies where a treatment group is exposed to plastics for decades and a control group is not. It hardly seems achievable to do correlation studies either, because you often don't know what packaging all the food you've consumed has been in, which may not even be in your control.
Your second paragraph is a very good argument against requiring that everything used in food packaging be shown to be safe before it can be used.
Can we reasonably run such a study to prove that wax paper is safe? What about plain paper? Do we just require all food producers to use no packaging at all until these controlled longitudinal studies are completed? If we allow them to use packaging, how do we define in a principled way what packaging is allowed while there are still unknowns?
One possibility would be to say that if something is currently widely used in some significant (think 5%) portion of the industry then we allow it, but that has two problems: First, it wouldn't exclude phthalates anyway, so it doesn't address the current concern. Second, it might exclude future packaging materials that we think might be safer than our current materials but which have yet to be tested.
Cellulose and algae are considered safe for human consumption and are also biodegradable; but is that conclusion backed by an RCT?
CO2 + Lignin is not edible but is biodegradable and could replace plastics. "CO2 and Lignin-Based Sustainable Polymers with Closed-Loop Chemical Recycling" https://onlinelibrary.wiley.com/doi/10.1002/adfm.202403035
What incentives would move the market to switch over to sustainable, biodegradable, food-safe packaging?
Possible exposure of Earth to dense interstellar medium 2-3M years ago
Are there fossil records that correspond to higher levels of dense interstellar medium radiation 2-3M years ago?
Punctuated equilibrium: https://en.wikipedia.org/wiki/Punctuated_equilibrium
Punctuated equilibrium is not about higher mutation rates or even faster evolution, but about long periods of _stasis_ interrupted by "normal rate" evolution due to an upset in the equilibrium such as a sudden geographic isolation of two populations.
Was there a higher rate of cladogenesis 2-3M years ago?
Higher levels of radiation do correlate with higher rates of genetic mutation, presumably because radiation damages DNA and thereby affects replication and transcription.
Does the punctuated equilibrium theory model radiation level in estimating rate of evolution?
Rate of evolution: https://en.wikipedia.org/wiki/Rate_of_evolution
Show HN: Thread – AI-powered Jupyter Notebook built using React
Hey HN, we're building Thread (https://thread.dev/) an open-source Jupyter Notebook that has a bunch of AI features built in. The easiest way to think of Thread is if the chat interface of OpenAI code interpreter was fused into a Jupyter Notebook development environment where you could still edit code or re-run cells. To check it out, you can see a video demo here: https://www.youtube.com/watch?v=Jq1_eoO6w-c
We initially got the idea when building Vizly (https://vizly.fyi/) a tool that lets non-technical users ask questions from their data. While Vizly is powerful at performing data transformations, as engineers, we often felt that natural language didn't give us enough freedom to edit the code that was generated or to explore the data further for ourselves. That is what gave us the inspiration to start Thread.
We made Thread a pip package (`pip install thread-dev`) because we wanted to make Thread as easily accessible as possible. While there are a lot of notebooks that improve on the notebook development experience, they are often cloud hosted tools that are hard to access as an individual contributor unless your company has signed an enterprise agreement.
With Thread, we are hoping to bring the power of LLMs to the local notebook development environment while blending the editing experience that you can get in a cloud hosted notebook. We have many ideas on the roadmap but instead of building in a vacuum (which we have made the mistake of before) our hope was to get some initial feedback to see if others are as interested in a tool like this as we are.
Would love to hear your feedback and see what you think!
This seems cool! Is there a way to try it locally with an open LLM? If you provide a way to set the OpenAI server URL and other parameters, that would be enough. Is the API_URL server documented, so that a mock local one can be created?
Thanks.
Definitely, it is something we are super focused on as it seems to be a use case that is important for folks. Opening up the proxy server and adding local LLM support is my main focus for today and will hopefully update on this comment when it is done :)
From https://news.ycombinator.com/item?id=39363115 :
> https://news.ycombinator.com/item?id=38355385 : LocalAI, braintrust-proxy; [and promptfoo, chainforge]
(Edit)
From "Show HN: IPython-GPT, a Jupyter/IPython Interface to Chat GPT" https://news.ycombinator.com/item?id=35580959#35584069
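Until an official local-LLM hook lands, the usual pattern is to point an OpenAI-style client at an OpenAI-compatible local server (e.g. LocalAI or llama.cpp's server). A minimal stdlib-only sketch of building such a request; the `http://localhost:8080/v1` base URL and the `local-model` name are illustrative assumptions, not documented Thread endpoints:

```python
import json
import os

# Assumption: a local OpenAI-compatible server (e.g. LocalAI) listens at
# this base URL; the model name is hypothetical.
base_url = os.environ.get("OPENAI_BASE_URL", "http://localhost:8080/v1")
api_key = os.environ.get("OPENAI_API_KEY", "not-needed-locally")

def build_chat_request(prompt, model="local-model"):
    """Build the POST target, headers, and JSON body for /chat/completions."""
    url = f"{base_url}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return url, headers, body

url, headers, body = build_chat_request("Summarize this notebook cell")
print(url)
```

Any HTTP client (or the `openai` package with `base_url` overridden) can then send this request to the local server instead of api.openai.com.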
I built an ROV to solve missing person cases
/? underwater infrared camera: https://www.google.com/search?q=underwater+infrared+camera
r/rov: https://www.reddit.com/r/rov/
Bioradiolocation: https://en.wikipedia.org/wiki/Bioradiolocation
FMCW: https://en.wikipedia.org/wiki/Continuous-wave_radar
mmWave (60 GHz) can do heartbeat detection above water, FWIU. As can WiFi.
mmwave (millimeter wave), UWA (Underwater Acoustic)
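For scale, FMCW range resolution is set by the sweep bandwidth: ΔR = c / (2B). A minimal sketch; the 4 GHz bandwidth is an assumed figure for a 60 GHz part, not a spec from the cited papers:

```python
# FMCW radar range resolution: dR = c / (2 * B), where B is sweep bandwidth.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Smallest range separation an FMCW chirp of this bandwidth resolves."""
    return C / (2 * bandwidth_hz)

# Assumed 4 GHz sweep for a 60 GHz mmWave part: ~3.75 cm resolution,
# fine enough to register millimeter-scale chest motion via phase.
dr = range_resolution(4e9)
print(f"{dr * 100:.2f} cm")
```

Heartbeat detection itself relies on phase change within a range bin, which resolves far smaller displacements than ΔR.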
Citations of "Analysis and estimation of the underwater acoustic millimeter-wave communication channel" (2016) https://scholar.google.com/scholar?cites=8297460493079369585...
Citations of "Wi-Fi signal analysis for heartbeat and metal detection: a comparative study of reliable contactless systems" https://scholar.google.com/scholar?cites=3926358377223165726...
/? does WiFi work underwater? https://www.google.com/search?q=does+wifi+work+underwater
https://scholar.google.com/scholar?cites=1308760257416493671... ... "Environment-independent textile fiber identification using Wi-Fi channel state information", "Measurement of construction materials properties using Wi-Fi and convolutional neural networks"
"Underwater target detection by measuring water-surface vibration with millimeter-wave radar" https://scholar.google.com/scholar?cites=1710768155624387794... :
> UWSN (Underwater Sensor Network)
I'm reminded of Baywatch S09E01, though those aren't actual trained lifeguards. (The film Armageddon works as a training film because of all of the [safety] mistakes.) https://www.google.com/search?q=baywatch+s09e01
Can you pump water without any electricity?
Observed that the Second Law of Thermodynamics holds with water pumps, too
A third way: evaporation due to relative humidity
A fourth way to raise water without electricity: solar concentration to produce steam in a loop
"Using solar energy to generate heat at 1050°C high temperatures" https://news.ycombinator.com/item?id=40419617
> A fourth way to raise water without electricity: solar concentration to produce steam in a loop
Nice. Perhaps some sort of system leveraging this application could be used to charge batteries on a local, distributed grid? Or simultaneously charge your electric car and heat/cool a home on a sunny day.
Is solar concentration more or less efficient than loud electric pumps at raising water for a water tower, which stores gravitational potential energy and pressurizes the tubes?
Could heat a water tank full of gravel or sand.
"Engineers develop 90-95% efficient electricity storage with piles of gravel" https://news.ycombinator.com/item?id=39927280
How efficiently does a solar concentrator fill a tank of water at a higher altitude?
(How) Does relative humidity at the top and bottom of the tube significantly affect fill/lift rate?
Is PV to batteries to pumps more or less efficient than solar concentrated steam?
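For comparing these methods, a back-of-envelope sketch of how much energy a water tower actually stores; the 100 m³ volume and 30 m height are assumed illustrative figures:

```python
# Gravitational potential energy stored by lifting water: E = rho * V * g * h.
RHO = 1000.0   # kg/m^3, water
G = 9.81       # m/s^2

def stored_energy_kwh(volume_m3, height_m):
    """Energy stored (before pump/turbine losses), in kWh."""
    joules = RHO * volume_m3 * G * height_m
    return joules / 3.6e6  # 1 kWh = 3.6 MJ

# Assumed figures: a 100 m^3 tank lifted 30 m stores about 8.2 kWh.
print(f"{stored_energy_kwh(100, 30):.2f} kWh")
```

That modest figure is why water towers are mainly about pressure regulation; bulk gravitational storage (pumped hydro) needs reservoir-scale volumes.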
A fifth way to raise water: n-body gravity; wait for the moon and the tides
A sixth way to raise water through a pipe possibly at an angle possibly into a water tank: MHD: Magnetohydrodynamic drive.
- "Designing a Futuristic Magnetic Turbine (MHD drive)" @PlasmaChannel https://youtube.com/watch?v=WgAIPOSc4TA
- "A Compact Electrodynamic Pump using Copper and TPU" https://news.ycombinator.com/item?id=40635173
- Is MHD more or less efficient than the other methods?
(Edit) It shouldn't matter whether the pipe to lift water through is at an angle; is my intuition bad on this?
Shallower launch trajectories are preferred over ascending perpendicular to the gravitational field of the greatest local mass; is that just because of Max Q, or also total energy?
But the downward force of a column of water in a tube at an angle is no greater than that of a column of the same vertical height pointing straight up, is it?
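On the angle question above: hydrostatics says the static head at the bottom of a full pipe depends only on the vertical rise (ρgh), not on the slanted length. A minimal sketch:

```python
import math

RHO = 1000.0  # kg/m^3, water
G = 9.81      # m/s^2

def bottom_pressure_pa(pipe_length_m, angle_deg):
    """Gauge pressure at the bottom of a full pipe: rho * g * h, where h is
    the vertical rise -- the pipe's length along its slope drops out."""
    h = pipe_length_m * math.sin(math.radians(angle_deg))
    return RHO * G * h

# A 20 m pipe at 30 degrees and a 10 m vertical pipe have the same
# vertical rise (10 m), so a pump sees the same static head for both.
p_angled = bottom_pressure_pa(20, 30)
p_vertical = bottom_pressure_pa(10, 90)
print(p_angled, p_vertical)
```

So the intuition is right for static head; an angled pipe only adds friction losses from the longer wetted length once water is flowing.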
- Herodotus > Life > Early Travels: https://en.wikipedia.org/wiki/Herodotus#Early_travels ; https://www.secretofthepyramids.com/herodotus:
> For this, they said, the ten years were spent, and for the underground chambers on the hill upon which the pyramids stand, which he caused to be made as sepulchral chambers for himself in an island, having conducted thither a channel from the Nile
- From https://news.ycombinator.com/item?id=39246398:
> According to the video series on "How the pyramids were built" at https://thepump.org/ by The Pharaoh's Pump Society, the later pyramids had a pool of water at the topmost layer of construction such that they could place and set water-tight blocks of carved stone using a crane barge that everybody walked to the side of to lift.
- "Egypt's pyramids may have been built on a long-lost branch of the Nile" (2024) https://news.ycombinator.com/item?id=40410572 :
- Edward Kunkel - "The Pharaoh's Pump" (1967,1977) https://www.google.com/books/edition/_/Vq04nQEACAAJ?hl=en https://search.worldcat.org/formats-editions/4049868
- Stephen Myers: https://www.amazon.com/stores/Steven-Myers/author/B003GNIQ4K : "How the Great Pyramid Operated as a Water Pump" https://www.youtube.com/playlist?list=PLPD2zVdMCA-gPa6RqoEHX...
- Chris Massey: https://www.amazon.com/stores/Chris-Massey/author/B00DWXY3FU : "How were the pyramids of egypt really built - Part 1" https://www.youtube.com/watch?v=TJcp13hAO3U&t=401s
- Lingams, though, produce electricity; and there may be electrode marks on some ancient prehistoric, possibly geopolymer, masonry stones. AI suggested that low-level electricity in geopolymer would more quickly remove water from the forming "blocks".
- Praveen Mohan: https://www.youtube.com/channel/UCe3OmUXohXrXnNZSRl5Z9kA
- /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
- Was the Osireion a hydraulic lift facility without electricity? FWIU the Osireion is a floating megalithic stone block island over a different water source, which flows when pressure is applied to it? [citation: one of the following YT videos IIRC]
/?youtube osireon https://www.youtube.com/results?search_query=osireion :
- "Secrets of the Osirion | Who Built Egypt's Biggest Megalithic Temple? | Megalithomania" https://www.youtube.com/watch?v=sx-zRw8HEAo&t=271s
> [...] it could be much older with its huge megalithic, perfectly cut blocks resembling those in the 4th Dynasty Valley Temple at Giza. Strabo, who visited the Osirion in the first century BC, said that it was constructed by Ismandes, or Mandes (Amenemhet III), the same builder as the Labyrinth at Hawara. John Anthony West says there were huge floods that occurred in the distant past and the river silt is evident at the site, so could easily be dated. The Osirion at Abydos in Egypt could have had therapeutic effects, according to Dorothy Eady (Omm Sety) who says she got healed by its waters, as did many others. It also has beautiful geometric 'flower of life' carvings on one of the uprights which have yet to be explained, and nubs, polygonal masonry and intricate, subtle striations and smoothing of the stones, which would not look out of place in ancient Peru. Whether it was 'discovered' by Seti, or whether he built it during his reign is discussed in this video.
- "Pre-Egyptian Technology Left By an Advanced Civilization That Disappeared" https://www.youtube.com/watch?v=K3uiMsqptOs
- "The Megalithic Osirion of Egypt: Live Walkthrough and New Observations!" (2024) https://www.youtube.com/watch?v=-TtsKKYLxPM
- Osireion: https://en.wikipedia.org/wiki/Osireion#Purpose
- Interglacial > Specific interglacials: https://en.wikipedia.org/wiki/Interglacial#Specific_intergla...
- Lake Agassiz > Formation of beaches: https://en.wikipedia.org/wiki/Lake_Agassiz#Formation_of_beac...
A Compact Electrodynamic Pump using Copper and TPU
"Fiber pumps for wearable fluidic systems" (2024) https://www.science.org/doi/10.1126/science.ade8654
Electromagnetic pump: https://en.wikipedia.org/wiki/Electromagnetic_pump
MHD: Magnetohydrodynamic drive: https://en.wikipedia.org/wiki/Magnetohydrodynamic_drive
New "X-Ray Vision" Technique Sees Inside Crystals
Wouldn't this be useful if it were possible to match lattice defects to wave operators and wave functions for QC?
- "Single atom defect in 2D material can hold quantum information at room temp" (2024) https://news.ycombinator.com/item?id=40461325 https://news.ycombinator.com/item?id=40478219
- "GAN semiconductor defects could boost quantum technology" (2024) https://news.ycombinator.com/item?id=39365467
- "Reversible optical data storage below the diffraction limit" (2023) https://news.ycombinator.com/item?id=38528844
"Enabling three-dimensional real-space analysis of ionic colloidal crystallization" (2024) https://www.nature.com/articles/s41563-024-01917-w
Zero Tolerance for Bias
Python's `random.shuffle()` and `random.sample()` use a Mersenne Twister (MT) PRNG by default: https://docs.python.org/3/library/random.html#random.shuffle ; Modules/_randommodule.c: https://github.com/python/cpython/blob/main/Modules/_randomm... ; Lib/random.py: https://github.com/python/cpython/blob/main/Lib/random.py#L3...
From "Uniting the Linux random-number devices" (2022) https://news.ycombinator.com/item?id=30377944 :
> > In 2020, the Linux kernel version 5.6 /dev/random only blocks when the CPRNG hasn't initialized. Once initialized, /dev/random and /dev/urandom behave the same. [17]
From https://news.ycombinator.com/item?id=37712506 :
> "lock-free concurrency" [...] "Ask HN: Why don't PCs have better entropy sources?" [for generating txids/uuids] https://news.ycombinator.com/item?id=30877296
> "100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations" https://link.aps.org/doi/10.1103/PRXQuantum.4.010330
google/paranoid_crypto.lib.randomness_tests: https://github.com/google/paranoid_crypto/tree/main/paranoid...
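Where seedable MT output is unacceptable, the stdlib already offers OS-entropy-backed primitives. A minimal sketch of a Fisher-Yates shuffle driven by `secrets.randbelow` (which draws uniformly, avoiding modulo bias) instead of the default Mersenne Twister:

```python
import secrets

def unbiased_shuffle(items):
    """Fisher-Yates shuffle driven by OS entropy via secrets.randbelow,
    instead of the seedable Mersenne Twister behind random.shuffle."""
    items = list(items)
    for i in range(len(items) - 1, 0, -1):
        j = secrets.randbelow(i + 1)  # uniform on [0, i], no modulo bias
        items[i], items[j] = items[j], items[i]
    return items

deck = unbiased_shuffle(range(10))
print(sorted(deck))  # same multiset back; order drawn from os.urandom
```

`random.SystemRandom().shuffle()` is an equivalent stdlib shortcut; both bypass the deterministic MT state entirely.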
Google Mesop: Build web apps in Python
This tool fills a very narrow use case for demos of AI chat.
If you know Python and can run inference models on a colab then you can quickly whip up a demo UI with builtin components like chat with text and images. Arguably everyone is testing these apps so making them easy is cool for demo and prototyping.
For anything more than a demo you don't use this.
What other use cases is this tool insufficient for?
BentoML already wins at model hosting in Python IMHO.
What limits this to demos?
IIRC there are a few ways to do ~ipywidgets with React patterns in notebooks, and then it's necessary to host the notebooks for users somehow:
- no online kernel at all,
- one kernel shared by all users (not safe),
- a container/VM per user (Voila, JupyterHub, BinderHub, jupyter-repo2docker), or
- a WASM app built and hosted statically (repo2jupyterlite), so that users run their own code in their own browser.
xoogler/ML researcher here. Everyone at Google used colab because the bazel build process is like 1-2 minutes and even just firing up a fully bazel-built script usually takes several seconds. Frontend is also near impossible to get started with if you aren't in a large established team. Colab is the best solution to a self-inflicted problem, and colab widgets are super popular internally for building hacky apps. This library makes perfect sense in this context...
How could their monorepo build system with distributed build component caching be improved? What is faster at that scale?
FWIU Blaze was rewritten as Bazel without the Omega scheduler integration?
gn wraps Ninja build to build Chromium and Fuchsia: https://gn.googlesource.com/gn
reminds me of GWT (google web toolkit)
And Wt and JWt, which also handle the server side in C++ and Java. https://en.wikipedia.org/wiki/Wt_(web_toolkit) :
> The only server-side framework implementing the strategy of progressive enhancement automatically
Is this still true?
Ask HN: Should Python web frameworks care about ASGI?
Hey Everyone
I am the author of a Python framework called Robyn. Robyn is one of the fastest Python web frameworks with a Rust runtime.
Robyn offers a variety of features designed to enhance your web development experience. However, one topic that has sparked mixed feelings within the community is Robyn's choice of not supporting ASGI. I'd love to hear your thoughts on this. Specifically, what specific features of ASGI do you miss in Robyn?
You can find Robyn's documentation here (https://robyn.tech/documentation). We're aiming for a v1.0 release soon, and your feedback will be invaluable in determining whether introducing ASGI support should be a priority.
Please avoid generic responses like "ASGI is a standard and should be supported." Instead, I'd ask for concrete, evidence-based arguments to help me understand the tangible benefits ASGI could bring to Robyn, or the specific ASGI feature whose absence would keep you from using Robyn.
Looking forward to your input!
Robyn - https://github.com/sparckles/robyn Docs - https://robyn.tech/documentation
If Robyn is not an instrumented SSL terminating load balancer with HTTP/3 support, there must still be an upstream HTTP server that could support ASGI.
ASGI (like WSGI) makes it possible to compose an application with reusable, tested "middleware" because there's an IInterface there.
awesome-asgi: https://github.com/florimondmanca/awesome-asgi
ASGI docs > Implementations: https://asgi.readthedocs.io/en/latest/implementations.html
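The middleware point is concrete: any callable with the `(scope, receive, send)` signature can wrap any other, which is what makes ASGI middleware reusable across servers and frameworks. A minimal stdlib-only sketch (the app and the header being added are hypothetical examples):

```python
import asyncio

async def app(scope, receive, send):
    """A bare ASGI app: one 200 response with a fixed body."""
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"hello"})

def add_header(inner, name: bytes, value: bytes):
    """ASGI middleware: intercepts http.response.start to append a header.
    It takes an ASGI callable and returns another ASGI callable."""
    async def wrapped(scope, receive, send):
        async def send_wrapper(message):
            if message["type"] == "http.response.start":
                message["headers"] = list(message["headers"]) + [(name, value)]
            await send(message)
        await inner(scope, receive, send_wrapper)
    return wrapped

async def main():
    # Drive the wrapped app directly, standing in for an ASGI server.
    messages = []
    async def send(message):
        messages.append(message)
    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}
    wrapped = add_header(app, b"x-frame-options", b"DENY")
    await wrapped({"type": "http", "path": "/"}, receive, send)
    return messages

messages = asyncio.run(main())
print(messages[0]["headers"])  # [(b'x-frame-options', b'DENY')]
```

Because the interface is just three arguments, the same `add_header` works unchanged under uvicorn, hypercorn, or daphne; that interchangeability is the concrete benefit Robyn forgoes without ASGI.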
> If Robyn is not an instrumented SSL terminating load balancer with HTTP/3 support, there must still be an upstream HTTP server that could support ASGI.
See, I'd like to believe that. However, there is no ASGI-compliant server that supports this properly (at least in the Python world).
And when the community demands/needs the feature, I see no reason why I wouldn't bake the above features into Robyn itself.
> reusable, tested "middleware" because there's an IInterface there.
Robyn has a drop-in replacement for these. So, these should not be a problem.
Having said that, I feel I could document these facts a bit more boldly to clear up the doubts.
Ventoy – Bootable USB Solution
> No need to update Ventoy when a new distro is released
> Most types of OS supported, 1200+ iso files tested ... 90%+ distros in distrowatch.com supported
Greenwashing - this kind of idea has been floating around for years and I don’t think it’s really that big of a problem
No, we have environmentally and financially unsustainable supply chain dependencies on silicon-grade sand and other gases and minerals.
PCBs are not biodegradable but could be. What is the problem?
You haven't pointed out anything specific to FR4, which is what this would be replacing. This is merely a ploy at getting funding, and I'm very skeptical about it because I've seen 2 or 3 companies do the exact same pitch and fail before.
> The goal: to design and test bio-based multilayer PCBs that reduce environmental impact, without compromising on functionality or performance.
What about cost?
And so instead: what is a sustainable flame retardant for Graphene Oxide PCBs; and is that a filler?