Better-performing “25519” elliptic-curve cryptography
> The x25519 algorithm also plays a role in post-quantum safe cryptographic solutions, having been included as the classical algorithm in the TLS 1.3 and SSH hybrid scheme specifications for post-quantum key agreement.
Really though? This mostly-untrue statement is the line that warrants adding hashtag #post-quantum-cryptography to the blogpost?
Actually, e.g. rustls added X25519Kyber768Draft00 support this year: https://news.ycombinator.com/item?id=41534500
/?q X25519Kyber768Draft00: https://www.google.com/search?q=X25519Kyber768Draft00
Microsoft's quantum-resistant cryptography is here
> With NIST releasing an initial group of finalized post-quantum encryption standards, we are excited to bring these into SymCrypt, starting with ML-KEM (FIPS 203, formerly Kyber), a lattice-based key encapsulation mechanism (KEM). In the coming months, we will incorporate ML-DSA (FIPS 204, formerly Dilithium), a lattice-based digital signature scheme and SLH-DSA (FIPS 205, formerly SPHINCS+), a stateless hash-based signature scheme.
> In addition to the above PQC FIPS standards, in 2020 NIST published the SP 800-208 recommendation for stateful hash-based signature schemes which are also resistant to quantum computers. As NIST themselves called out, these algorithms are not suitable for general use because their security depends on careful state management, however, they can be useful in specific contexts like firmware signing. In accordance with the above NIST recommendation we have added eXtended Merkle Signature Scheme (XMSS) to SymCrypt, and the Leighton-Micali Signature Scheme (LMS) will be added soon along with the other algorithms mentioned above.
microsoft/SymCrypt /CHANGELOG.md: https://github.com/microsoft/SymCrypt/blob/main/CHANGELOG.md
TIL that SymCrypt builds on Ubuntu: https://github.com/microsoft/SymCrypt/releases :
> Generic Linux AMD64 (x86-64) and ARM64 - built and validated on Ubuntu, but because SymCrypt has very few standard library dependencies, it should work on most Linux distributions
The Rustls TLS Library Adds Post-Quantum Key Exchange Support
- cf article about PQ (2024) https://blog.cloudflare.com/pq-2024/
- rustls-post-quantum: https://crates.io/crates/rustls-post-quantum
- rustls-post-quantum docs: https://docs.rs/rustls-post-quantum/latest/rustls_post_quant... :
> This crate provides a rustls::crypto::CryptoProvider that includes a hybrid [1], post-quantum-secure [2] key exchange algorithm – specifically X25519Kyber768Draft00.
> X25519Kyber768Draft00 is pre-standardization, so you should treat this as experimental. You may see unexpected interop failures, and the algorithm implemented here may not be the one that eventually becomes widely deployed.
> However, the two components of this key exchange are well regarded: X25519 alone is already used by default by rustls, and tends to have higher quality implementations than other elliptic curves. Kyber768 was recently standardized by NIST as ML-KEM-768.
"Module-Lattice-Based Key-Encapsulation Mechanism Standard" KEM: https://csrc.nist.gov/pubs/fips/203/final :
> The security of ML-KEM is related to the computational difficulty of the Module Learning with Errors problem. [...] This standard specifies three parameter sets for ML-KEM. In order of increasing security strength and decreasing performance, these are ML-KEM-512, ML-KEM-768, and ML-KEM-1024.
Breaking Bell's Inequality with Monte Carlo Simulations in Python
Hidden variable theory: https://en.wikipedia.org/wiki/Hidden-variable_theory
Bell test: https://en.wikipedia.org/wiki/Bell_test :
> To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η [\eta], defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N. David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of η > 2*sqrt(2)-2 ~= 0.83 is required for a loophole-free violation.[51] Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for η > 2/3 ~= 0.67, which is the optimal bound for the CHSH inequality.[53] Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for η > (sqrt(5)-1)/2 ~= 0.62 [54]
CHSH inequality: https://en.wikipedia.org/wiki/CHSH_inequality
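The headline's Monte Carlo idea can be sketched with stdlib Python: a deterministic local-hidden-variable model keeps the CHSH quantity at |S| <= 2, while sampling outcome pairs with the singlet-state correlation E(a,b) = -cos(a-b) approaches 2*sqrt(2). This is an illustrative sketch, not the article's code; the LHV response function chosen here (the sign of a cosine) is just one example, and Bell's bound holds for any such choice.

```python
import math
import random

random.seed(0)

def lhv_outcome(angle, lam):
    # Deterministic +/-1 outcome from a shared hidden variable lam.
    return 1 if math.cos(angle - lam) >= 0 else -1

def E_lhv(a, b, n=100_000):
    # Correlation under the local-hidden-variable model (anticorrelated pair).
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        total += lhv_outcome(a, lam) * -lhv_outcome(b, lam)
    return total / n

def E_qm(a, b, n=100_000):
    # Sample +/-1 pairs reproducing the singlet correlation E = -cos(a - b).
    p_same = (1 - math.cos(a - b)) / 2
    total = 0
    for _ in range(n):
        A = random.choice((1, -1))
        B = A if random.random() < p_same else -A
        total += A * B
    return total / n

def chsh(E, a, a2, b, b2):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
print("LHV |S| =", abs(chsh(E_lhv, a, a2, b, b2)))  # stays <= 2, up to sampling noise
print("QM  |S| =", abs(chsh(E_qm, a, a2, b, b2)))   # approaches 2*sqrt(2) ~= 2.83
```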
/sbin/chsh
Isn't it possible to measure the wake of a photon instead of measuring the photon itself; to measure the wake without affecting the boat that has already passed? And shouldn't a simple beam splitter be enough to demonstrate entanglement if there is an instrument with sufficient sensitivity to infer the phase of a passed photon?
This says that intensity is sufficient to read phase: https://news.ycombinator.com/item?id=40492160 :
> "Bridging coherence optics and classical mechanics: A generic light polarization-entanglement complementary relation" (2023) https://journals.aps.org/prresearch/abstract/10.1103/PhysRev... :
>> This means that hard-to-measure optical properties such as amplitudes, phases and correlations—perhaps even those of quantum wave systems—can be deduced from something a lot easier to measure: light intensity
And all it takes to win the game is to transmit classical bits with digital error correction using hidden variables?
From "Violation of Bell inequality by photon scattering on a two-level emitter" https://news.ycombinator.com/item?id=40917761 ... From "Scientists show that there is indeed an 'entropy' of quantum entanglement" (2024) https://news.ycombinator.com/item?id=40396001#40396211 :
> IIRC I read on Wikipedia one day that Bell's actually says there's like a 60% error rate?(!)
That was probably the "Bell test" article, which - IIUC - does indeed indicate that if you can read 62% of the photons you are likely to find a loophole-free violation.
What is the photon detection rate in this and other simulators?
Show HN: Pyrtls, rustls-based modern TLS for Python
"PEP 543 – A Unified TLS API for Python" specified interfaces that would make it easier to use different TLS implementations: https://peps.python.org/pep-0543/#interfaces
alphaXiv: Open research discussion on top of arXiv
Hey alphaXiv, you won't let me claim some of my preprints, because there's no match with the email address. Which there can't be, as we only list generic first.last@org addresses in the papers. Tried the claiming process twice; nothing happened. Not all papers are on ORCID, so that doesn't help.
I think it'll be hard growing a discussion platform if there are barriers to entry like that to even populate your profile.
How would you propose making claiming possible without the risk of hijacking/misrepresentation?
The only way I see this working is for paper authors to include their public keys in the paper, preferably as metadata, and then produce a signed message using their private key, which allows them to claim the paper.
While the grandparent is understandably disappointed with the current implementation, relying on emails was always doomed from the start.
Given that the paper would have to be changed regardless, including the full email address is a relatively easy solution. ORCID is probably easier than requiring public keys, and a lot of journals already require it.
W3C Decentralized Identifiers (DIDs) are designed for this use case.
Decentralized identifier: https://en.wikipedia.org/wiki/Decentralized_identifier
W3C TR did-core: "Decentralized Identifiers (DIDs) v1.0": https://www.w3.org/TR/did-core/
W3C TR did-use-cases: "Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/
"Email addresses are not good 'permanent' identifiers for accounts" (2024) https://news.ycombinator.com/item?id=38823817#38831952
I'm sure that would work, but most researchers already have an ORCID and are required to provide it in other places anyway.
Charging lithium-ion batteries at high currents first increases lifespan by 50%
But are there risks and thus costs?
Initial lithium deactivation is 30% with fast formation charging, compared to 9% with slow formation charging.
How much capacity is lost as a result of this?
Capacity decreases as more lithium is deactivated. We could just add extra capacity to make up for it, though. From there, the battery would maintain its capacity for longer than before.
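As a rough worked example with the numbers from this thread (treating "deactivation" as a one-time loss of usable lithium inventory), the extra nominal capacity needed to compensate for fast formation is:

```python
# Illustrative arithmetic, assuming the thread's figures: fast formation
# deactivates 30% of the lithium inventory vs. 9% for slow formation.
fast_loss, slow_loss = 0.30, 0.09
# To deliver the same usable capacity C, a fast-formed cell needs
# C/(1-0.30) of nominal capacity vs. C/(1-0.09) for a slow-formed cell.
extra = (1 - slow_loss) / (1 - fast_loss) - 1
print(f"{extra:.1%} more nominal capacity needed")  # -> 30.0% more
```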
Isn't there some fire risk from charging a _ battery at higher-than-spec currents?
It's just the single initial (formation) charge that has to be at high current.
So, for safety, shouldn't there also be battery fire containment [vessels] for the manufacturing process?
Absolutely, but I hope there already are measures taken to prevent and contain battery fires during manufacturing.
Same origin: Quantum and GR from Riemannian geometry and Planck scale formalism
ScholarlyArticle: "On the same origin of quantum physics and general relativity from Riemannian geometry and Planck scale formalism" (2024) https://www.sciencedirect.com/science/article/pii/S092765052...
- NewsArticle: "Magical equation unites quantum physics, Einstein’s general relativity in a first" https://interestingengineering.com/science/general-relativit... :
> “We proved that the Einstein field equation from general relativity is actually a relativistic quantum mechanical equation,” the researchers note in their study.
> [...] To link them, the researchers developed a mathematical framework that “Redefined the mass and charge of leptons (fundamental particles) in terms of the interactions between the energy of the field and the curvature of the spacetime.”
> “The obtained equation is covariant in space-time and invariant with respect to any Planck scale. Therefore, the constants of the universe can be reduced to only two quantities: Planck length and Planck time,” the researchers note.
The Reddit discussion claiming this is AI nonsense: https://www.reddit.com/r/TheoreticalPhysics/comments/1fbij1e...
Someone could also or instead read:
"Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
From https://news.ycombinator.com/item?id=38871054 .. "Light and gravitational waves don't arrive simultaneously" (2023) https://news.ycombinator.com/item?id=38056295 :
>> In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics)
Newly Discovered Antibody Protects Against All Covid-19 Variants
> The technology used to isolate the antibody, termed Ig-Seq, gives researchers a closer look at the antibody response to infection and vaccination using a combination of single-cell DNA sequencing and proteomics.
/? ig-seq [ site:github.com ] : https://www.google.com/search?q=ig-seq+site%3Agithub.com
https://www.illumina.com/science/sequencing-method-explorer/... :
> Rep-Seq is a collective term for repertoire sequencing technologies. DNA sequencing of immunoglobulin genes (Ig-seq) and molecular amplification fingerprinting
> [Ig-seq] is a targeted gDNA amplification method performed with primers complementary to the rearranged V-region gene (VDJ recombinant). Amplification of cDNA is then performed with the appropriate 5’ primers.
Ten simple rules for scientific code review
> Rule 1: Review code just like you review other elements of your research
> Rule 2: Don’t leave code review to the end of the project
> Rule 3: The ideal reviewer may be closer than you think
> Rule 4: Make it easy to review your code
> Rule 5: Do it in person and synchronously… A. Circulate code in advance. B. Provide necessary context. C. Ask for specific feedback if needed. D. Walk through the code. E. Gather actionable comments and suggestions
> Rule 6:…and also remotely and asynchronously
> Rule 7: Review systematically
> A. Run the code and aim to reproduce the results. B. Read the code through—first all of it, with a focus on the API. C. Ask questions. D. Read the details—focus on modularity and design, E. Read the details—focus on the math, F. Read the details—focus on performance, G. Read the details—focus on formatting, typos, comments, documentation, and overall code clarity
> Rule 8: Know your limits
> Rule 9: Be kind
> Rule 10: Reciprocate
Quantum error correction below the surface code threshold
"Quantum error correction below the surface code threshold" (2024) https://arxiv.org/abs/2408.13687 :
> Abstract: Quantum error correction provides a path to reach practical quantum computing by combining multiple physical qubits into a logical qubit, where the logical error rate is suppressed exponentially as more qubits are added. However, this exponential suppression only occurs if the physical error rate is below a critical threshold. In this work, we present two surface code memories operating below this threshold: a distance-7 code and a distance-5 code integrated with a real-time decoder. The logical error rate of our larger quantum memory is suppressed by a factor of Λ = 2.14 ± 0.02 when increasing the code distance by two, culminating in a 101-qubit distance-7 code with 0.143% ± 0.003% error per cycle of error correction. This logical memory is also beyond break-even, exceeding its best physical qubit's lifetime by a factor of 2.4 ± 0.3. We maintain below-threshold performance when decoding in real time, achieving an average decoder latency of 63 μs at distance-5 up to a million cycles, with a cycle time of 1.1 μs. To probe the limits of our error-correction performance, we run repetition codes up to distance-29 and find that logical performance is limited by rare correlated error events occurring approximately once every hour, or 3 × 10^9 cycles. Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms.
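The abstract's Λ can be extrapolated naively, assuming the suppression factor holds at larger distances: each increase of the code distance by two divides the logical error per cycle by about 2.14.

```python
# Naive extrapolation of the paper's reported numbers: error per cycle
# is divided by Lambda = 2.14 for every +2 in code distance.
lam, eps7 = 2.14, 0.143e-2  # 0.143% per cycle at distance 7
for d in (7, 9, 11, 13):
    eps = eps7 / lam ** ((d - 7) / 2)
    print(f"d={d}: {eps:.4%} error per cycle")
```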
Cavity-Mediated Entanglement of Parametrically Driven Spin Qubits via Sidebands
From "Cavity-Mediated Entanglement of Parametrically Driven Spin Qubits via Sidebands" (2024) https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan... :
> We show that the sidebands generated via the driving fields enable highly tunable qubit-qubit entanglement using only ac control and without requiring the qubit and cavity frequencies to be tuned into simultaneous resonance. The model we derive can be mapped to a variety of qubit types, including detuning-driven one-electron spin qubits in double quantum dots and three-electron resonant exchange qubits in triple quantum dots. The high degree of nonlinearity inherent in spin qubits renders these systems particularly favorable for parametric drive-activated entanglement.
- Article: "Radical quantum computing theory could lead to more powerful machines than previously imagined" https://www.livescience.com/technology/computing/radical-qua... :
> Each qubit operates in a given frequency.
> These qubits can then be stitched together through quantum entanglement — where their data is linked across vast separations over time or space — to process calculations in parallel. The more qubits are entangled, the more exponentially powerful a quantum computer will become.
> Entangled qubits must share the same frequency. But the study proposes giving them "extra" operating frequencies so they can resonate with other qubits or work on their own if needed.
Is there an infinite amount of quantum computational resources in the future that could handle today's [quantum] workload?
Rpi-open-firmware: open-source VPU side bootloader for Raspberry Pi
> Additionally, there is a second-stage chainloader running on ARM capable of initializing eMMC, FAT, and the Linux kernel.
There's now (SecureBoot) UEFI firmware for Rpi3+, so grub and systemd-boot should work on Raspberry Pis: https://raspberrypi.stackexchange.com/questions/99473/why-is... :
> The Raspberry Pi is special in that the primary (on-chip ROM), secondary (bootcode.bin) and third bootloader (start.elf) are executed on its GPU, one chainloading the other. The instruction set is not properly documented and start.elf
> What can be done (as SuSE and Microsoft have demonstrated) is to replace kernel.img at will - even with a custom version of TianoCore (an open-source UEFI implementation) or U-Boot. This can then be used to start an UEFI-compatible GRUB2 or BOOTMGR binary.
"UEFI Secure Boot on the Raspberry Pi" (2023) https://news.ycombinator.com/item?id=35815382
AlphaProteo generates novel proteins for biology and health research
> Trained on vast amounts of protein data from the Protein Data Bank (PDB) and more than 100 million predicted structures from AlphaFold, AlphaProteo has learned the myriad ways molecules bind to each other. Given the structure of a target molecule and a set of preferred binding locations on that molecule, AlphaProteo generates a candidate protein that binds to the target at those locations.
Show HN: An open-source implementation of AlphaFold3
Hi HN - we’re the founders of Ligo Biosciences and are excited to share an open-source implementation of AlphaFold3, the frontier model for protein structure prediction.
Google DeepMind and their new startup Isomorphic Labs are expanding into drug discovery. They developed AlphaFold3 as their model to accelerate drug discovery and create demand from big pharma. They already signed Novartis and Eli Lilly for $3 billion - Google's becoming a pharma company! (https://www.isomorphiclabs.com/articles/isomorphic-labs-kick...)
AlphaFold3 is a biomolecular structure prediction model that can do three main things: (1) Predict the structure of proteins; (2) Predict the structure of drug-protein interactions; (3) Predict nucleic acid - protein complex structure.
AlphaFold3 is incredibly important for science because it vastly accelerates the mapping of protein structures. It takes one PhD student their entire PhD to do one structure. With AlphaFold3, you get a prediction in minutes on par with experimental accuracy.
There’s just one problem: when DeepMind published AlphaFold3 in May (https://www.nature.com/articles/s41586-024-07487-w), there was no code. This brought up questions about reproducibility (https://www.nature.com/articles/d41586-024-01463-0) as well as complaints from the scientific community (https://undark.org/2024/06/06/opinion-alphafold-3-open-sourc...).
AlphaFold3 is a fundamental advance in structure modeling technology that the entire biotech industry deserves to be able to reap the benefits from. Its applications are vast, including:
- CRISPR gene editing technologies, where scientists can see exactly how the DNA interacts with the scissor Cas protein;
- Cancer research - predicting how a potential drug binds to the cancer target. One of the highlights in DeepMind’s paper is the prediction of a clinical KRAS inhibitor in complex with its target.
- Antibody / nanobody to target predictions. AlphaFold3 improves accuracy on this class of molecules twofold compared to the next best tool.
Unfortunately, no companies can use it since it is under a non-commercial license!
Today we are releasing the full model trained on single chain proteins (capability 1 above), with the other two capabilities to be trained and released soon. We also include the training code. Weights will be released once training and benchmarking is complete. We wanted this to be truly open source so we used the Apache 2.0 license.
Deepmind published the full structure of the model, along with each components’ pseudocode in their paper. We translated this fully into PyTorch, which required more reverse engineering than we thought!
When building the initial version, we discovered multiple issues in DeepMind’s paper that would interfere with the training - we think the deep learning community might find these especially interesting. (Diffusion folks, we would love feedback on this!) These include:
- MSE loss scaling differs from Karras et al. (2022). The weighting provided in the paper does not down-weight the loss at high noise levels.
- Omission of residual layers in the paper - we add these back and see benefits in gradient flow and convergence. Anyone have any idea why Deepmind may have omitted the residual connections in the DiT blocks?
- The MSA module, in its current form, has dead layers. The last pair weighted averaging and transition layers cannot contribute to the pair representation, hence no grads. We swap the order to the one in the ExtraMsaStack in AlphaFold2. An alternative solution would be to use weight sharing, but whether this is done is ambiguous in the paper.
More about those issues here: https://github.com/Ligo-Biosciences/AlphaFold3
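The residual-connection point can be illustrated with a toy 1-D "network" (this is not the AlphaFold3 code): without the identity path, a deep stack of contracting layers shrinks the signal multiplicatively, while x + f(x) keeps the identity path open, which is the gradient-flow and convergence benefit mentioned above.

```python
import random

random.seed(1)

def layer(x, w):
    # A 1-D stand-in for a block's residual branch f(x).
    return w * x

ws = [random.uniform(0.3, 0.7) for _ in range(20)]

x_plain = x_res = 1.0
for w in ws:
    x_plain = layer(x_plain, w)      # x <- f(x): signal shrinks each layer
    x_res = x_res + layer(x_res, w)  # x <- x + f(x): identity path preserved

print("plain stack:   ", x_plain)  # vanishingly small after 20 layers
print("residual stack:", x_res)    # stays well above zero
```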
How this came about: we are building Ligo (YC S24), where we are using ideas from AlphaFold3 for enzyme design. We thought open sourcing it was a nice side quest to benefit the community.
For those on Twitter, there was a good thread a few days ago that has more information: https://twitter.com/ArdaGoreci/status/1830744265007480934.
A few shoutouts: A huge thanks to OpenFold for pioneering the previous open-source implementation of AlphaFold. We did a lot of our early prototyping with proteinFlow, developed by Lisa at AdaptyvBio; we also look forward to partnering with them to bring you the next versions! We are also partnering with Basecamp Research to supply this model with the best sequence data known to science. Thanks to Matthew Clark (https://batisio.co.uk) for his amazing animations!
We’re around to answer questions and look forward to hearing from you!
Does this win the Folding@home competition, or is/was that a different goal than what AlphaFold3 and ligo-/AlphaFold3 already solve for?
Folding@Home https://en.wikipedia.org/wiki/Folding@home :
> making it the world's first exaflop computing system
Folding@home and protein structure prediction methods such as AlphaFold address related but different questions. The former intends to describe the process of a protein undergoing folding over time, while the latter tries to determine the most stable conformation of a protein (the end result of folding).
Folding@home uses Rosetta, a physics-based approach that is outperformed by deep learning methods such as AlphaFold2/3.
Folding@home uses Rosetta only to generate initial conformations[1], but the actual simulation is based on Markov State Models. Note that there is another distributed computing project for Rosetta, Rosetta@home.
[1]: https://foldingathome.org/dig-deeper/#:~:text=employing%20Ro...
Kids Should Be Taught to Think Logically
I agree with the article. But I'd go further and say that children should be introduced to, and taught, multiple styles of thinking. And they should understand the strengths and weaknesses of those styles for different tasks and situations.
But logical thought is definitely important. Judging by some people I encounter, even understanding the concepts of "necessary but not sufficient" or "X is-a Y does not mean that Y is-a X" would make a big difference.
There are plenty of lists of thinking styles. I doubt that any of them are exhaustive or discrete. For example:
Critical, creative, analytical, abstract, concrete, divergent/lateral, convergent/vertical.
Or
Synthesist, idealist, pragmatic, analytic, realist.
There are lots of options. My point was really about awareness of the different styles and their advantages/disadvantages.
Inductive, Deductive, Abductive Inference
Reason > Logical reasoning methods and argumentation: https://en.wikipedia.org/wiki/Reason#Logical_reasoning_metho...
Critical Thinking > Logic and rationality > Deduction, abduction and induction ; Critical thinking and rationality: https://en.wikipedia.org/wiki/Critical_thinking#Deduction,_a...
Logic: https://en.wikipedia.org/wiki/Logic :
> Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language.
Logical reasoning: https://en.wikipedia.org/wiki/Logical_reasoning
An argument can be Sound or Unsound; or Cogent or Uncogent.
Exercises I recall:
Underline or Highlight or Annotate the topic sentence / precis sentence (which can be the last sentence of an introductory paragraph),
Underline the conclusion,
Underline and label the premises; P1, P2, P3
Don't trust; Verify the logical form
If P1 then Q
P1
Therefore, Q

If P1, P2, and P3, then Q
P1 (kinda)
We all like ____
Therefore, Q
Logic puzzles,"Pete, it's a fool that looks for logic in the chambers of the human heart.", money x3, posturing
Coding on iPad using self-hosted VSCode, Caddy, and code-server
Is it ergonomic to code on a tablet without a BCI?
https://vscode.dev can connect to a remote vscode instance in a container e.g. over Remote Tunnels ; but browsers trap so many keyboard shortcuts.
Which container with code-server to run to connect to from vscode client?
You can specify a development container that contains code-server with devcontainer.json.
vscode, Codespaces and these tools support devcontainer.json, too:
coder/envbuilder: https://github.com/coder/envbuilder
loft-sh/devpod: https://github.com/loft-sh/devpod
lapce/lapdev: https://github.com/lapce/lapdev
JupyterHub and BinderHub can spawn containers that also run code-server. Though repo2docker and REES don't yet support devcontainer.json, they do support bringing your own Dockerfile.
> but browsers trap so many keyboard shortcuts.
As a result, unfortunately the F1 keyboard shortcut opens the browser's help instead of vscode's.
Aren't there browser-based RDP apps that don't mangle all of the shortcuts, or does it have to be fullscreen in a browser to support a VM-like escape sequence to return keyboard input to the host?
OpenSSL 3.4 Alpha 1 Released with New Features
The roadmap on the site is 404'ing and the openssl/web repo is archived?
The v3.4 roadmap entry PR mentions QUIC (which HTTP/3 requires)? https://github.com/openssl/web/pull/481/files
Someday there will probably be a TLS 1.4/2.0 with PQ, and also FIPS 140-4?
Are there additional ways to implement NIST PQ finalist algos with openssl?
open-quantum-safe/oqs-provider: https://github.com/open-quantum-safe/oqs-provider :
    openssl list -signature-algorithms -provider oqsprovider
    openssl list -kem-algorithms -provider oqsprovider
    openssl list -tls-signature-algorithms
Gold nugget formation from earthquake-induced piezoelectricity in quartz
"Gold nugget formation from earthquake-induced piezoelectricity in quartz" (2024) https://www.nature.com/articles/s41561-024-01514-1
Piezoelectric crystal earpieces are part of how classic crystal radios work (the detector itself is a rectifying crystal); AM (amplitude modulation), shortwave, longwave
Quartz is dielectric.
Quartz clocks, quartz voltmeters
From https://news.ycombinator.com/item?id=40859142 :
> Ancient lingams had Copper (Cu) and Gold (Au), and crystal FWIU.
> From "Can you pump water without any electricity?" https://news.ycombinator.com/item?id=40619745 :
>> - /? praveen mohan lingam: https://www.youtube.com/results?search_query=praveen+mohan+l...
FWIU rotating the lingams causes vibrations which scare birds away.
Piezoelectricity: https://en.wikipedia.org/wiki/Piezoelectricity
"New "X-Ray Vision" Technique Sees Inside Crystals" (2024) https://news.ycombinator.com/item?id=40630832
"Observation of current whirlpools in graphene at room temperature" (2024) https://www.science.org/doi/10.1126/science.adj2167 .. https://news.ycombinator.com/item?id=40360691 :
> Electron–electron interactions in high-mobility conductors can give rise to transport signatures resembling those described by classical hydrodynamics. Using a nanoscale scanning magnetometer, we imaged a distinctive hydrodynamic transport pattern—stationary current vortices—in a monolayer graphene device at room temperature.
"Goldene: New 2D form of gold makes graphene look boring" (2024) https://news.ycombinator.com/item?id=40079905
"Gold nanoparticles kill cancer – but not as thought" (2024) https://news.ycombinator.com/item?id=40819854 :
> Star-shaped gold nanoparticles up to 200nm kill at least some forms of cancer; https://news.ycombinator.com/item?id=40819854
Are there phononic excitations from earthquake-induced piezoelectric and pyroelectric effects?
Do surface phonon polaritons (SPhP) affect the interactions between quartz and gold, given heat and vibration as from an earthquake?
"Extreme light confinement and control in low-symmetry phonon-polaritonic crystals" like quartz https://arxiv.org/html/2312.06805v2
Dublin Core, what is it good for?
Do regular search engines index DCMI dcterms:? Does Google Scholar or Google Search index schema.org/CreativeWork yet?
Chrome 130: Direct Sockets API
Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431
Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy
docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...
I applaud the Chrome team for implementing
Isolated Web Apps seem like they mitigate the majority of the significant concerns Mozilla has.
Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.
Typically today, a user must agree to install a package that uses L3 sockets before it can use sockets for anything other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.
IMHO they're way too confident in application sandboxing, but we already know that we need containers, gVisor or Kata, and container-selinux to isolate server processes.
Chrome, Firefox, and Edge all have much the same app sandbox now FWIU. It is compromised at Pwn2Own every year. I don't think the application-level browser sandbox has a better record of vulns than containers or VMs.
So, IDK about trusting in-browser isolation features, or sockets with unsigned cross-domain policies.
OTOH, things that would work with Direct Sockets IIUC: a P2P VPN server/client, a blind proxy relay without user confirmation, an HTTP server, a behind-the-firewall port scanner that uploads scans.
I can understand FF's position on Direct Sockets.
There used to be a "https server for apps" Firefox extension.
It is still necessary to install e.g. Metamask, adding millions of lines of unverified browser code and a JS/WASM interpreter to an otherwise secured Zero Trust chain. Without a wallet browser extension like Metamask explicitly installed, browsers otherwise must use vulnerable regular DNS instead of EDNS. Without Metamask installed, it's not possible for a compromised browser to hack at a blockchain without a relay, because most blockchains specifically avoid regular HTTPS. Existing browsers do not support blockchain protocols without the user approving installation of e.g. Metamask over PKI SSL.
FWIU there are many examples of people hijacking computers to mine PoW coins in JS or WASM, and we don't want that to be easier to do without requiring confirmation from easily-fooled users.
Browsers SHOULD indicate when a browser tab is PoW mining in the background as the current user.
Are there downgrade attacks on this?
Don't you need HTTPS to serve the origin policy before switching to Direct Sockets anyway?
HTTP/3 QUIC is built on UDP. Can apps work with WS or WebRTC over HTTP/3 instead of sockets?
Edit: (now I can read the spec in question)
Thanks for your considered response. I will digest it a bit when I have time! :)
Show HN: A retro terminal text editor for GNU/Linux coded in C (C-edit)
I set about coding my own version of the classic MS-DOS EDIT.COM for GNU/Linux systems four years ago, and this is where the project is at... still rough around the edges but works well with Termux! :) Demo: https://www.youtube.com/watch?v=H7bneUX_kVA
QB64 is an EDIT.COM-style IDE and a compiler for QuickBasic .BAS programs: https://github.com/QB64Official/qb64#usage
There's a QBjs, for QuickBasic on the web.
There's a QB64 vscode extension: https://github.com/QB64Official/vscode
Textual has a MarkdownViewer TUI control with syntax highlighting and a file tree in a side panel like NERDtree, but not yet a markdown editor.
QuickBasic was my first programming language and EDIT.COM was my first IDE. I love going back down memory lane, thanks!
Same. `edit` to edit. These days perhaps not coincidentally I have a script called `e` for edit that opens vim: https://github.com/westurner/dotfiles/blob/develop/scripts/e
GORILLA.BAS! https://en.wikipedia.org/wiki/Gorillas_(video_game)
gorilla.bas with dosbox in html: https://archive.org/details/GorillasQbasic
rewritten with jquery: https://github.com/theraccoonbear/BrowserGORILLAS.BAS/blob/m...
Basically the same thing but for learning, except you can't change the constants in the simulator by editing the source of the game with Ctrl-C and running it with F5:
- PHET > Projectile Data Lab https://phet.colorado.edu/en/simulations/projectile-data-lab
- sensorcraft is like minecraft but in python with pyglet for OpenGL 3D; self.add_block(), gravity, ai, circuits: https://sensorcraft.readthedocs.io/en/stable/
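The Gorillas-style banana throw itself is just a few lines of projectile physics; a minimal sketch (the gravity constant is exactly the kind of thing you'd tweak in QBasic with Ctrl-C and F5):

```python
import math

def trajectory(speed, angle_deg, gravity=9.8, dt=0.1):
    """Return (x, y) points of a projectile launched from the origin,
    sampled every dt seconds until it falls back below y=0."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    t = 0.0
    points = [(0.0, 0.0)]
    while True:
        t += dt
        x = vx * t
        y = vy * t - 0.5 * gravity * t * t  # constant-acceleration kinematics
        if y < 0:
            break
        points.append((x, y))
    return points
```

Changing `gravity` (the game had a gravity setting per round) or `dt` and re-running is the whole "edit the source, press F5" loop.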
PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads
Which Intel GPUs? Is this for the Max Series GPUs and/or the Gaudi 3 launching in 3Q?
"Intel To Sunset First-Gen Max Series GPU To Focus On Gaudi, Falcon Shores Chips" (2024-05) https://www.crn.com/news/components-peripherals/2024/intel-s...
It’ll be something that supports oneAPI, so the PVC 1100/1500, the Flex GPUs, the A-series desktop GPUs (and B-series when it comes out). Not Gaudi 3 afaict, that has a custom toolchain (but Falcon Shores should be on oneAPI).
SDL3 new GPU API merged
SDL3 is still in preview, but the new GPU API is now merged into the main branch while SDL3 maintainers apply some final tweaks.
As far as I understand: the new GPU API is notable because it should allow writing graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - and previously that required Unity or Unreal, or your own custom solution.
WebGPU/WGSL is a similar "cross-platform graphics stack" effort but as far as I know nobody has written console backends for it. (Meanwhile the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)
Why is SDL API needed vs gfx-rs / wgpu though? I.e. was there a need to make yet another one?
Having a C API like that is always nice. I don't wanna fight Rust.
WebGPU has a (mostly) standardized C API: https://github.com/webgpu-native/webgpu-headers
wgpu supports WebGPU: https://github.com/gfx-rs/wgpu :
> While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.
That’s just for the shading language
The Rust wgpu project has an alternative C API which is identical to (or at least closely matches; I haven't looked at it in detail yet) the official webgpu.h header. For instance all examples in here are written in C:
https://github.com/gfx-rs/wgpu-native/tree/trunk/examples
There's definitely also people using wgpu from Zig via the C bindings.
I found:
shlomnissan/sdl-wasm: https://github.com/shlomnissan/sdl-wasm :
> A simple example of compiling C/SDL to WebAssembly and binding it to an HTML5 canvas.
erik-larsen/emscripten-sdl2-ogles2: https://github.com/erik-larsen/emscripten-sdl2-ogles2 :
> C++/SDL2/OpenGLES2 samples running in the browser via Emscripten
IDK how much work there is to migrate these to SDL3?
Are there WASM compilation advantages to SDL3 vs SDL2?
There are rust SDL2 bindings: https://github.com/Rust-SDL2/rust-sdl2#use-sdl2render
use::sdl2render, gl-rs for raw OpenGL: https://github.com/Rust-SDL2/rust-sdl2?tab=readme-ov-file#op...
*sdl2::render
src/sdl2/render.rs: https://github.com/Rust-SDL2/rust-sdl2/blob/master/src/sdl2/...
SDL/test /testautomation_render.c: https://github.com/libsdl-org/SDL/blob/main/test/testautomat...
SDL is for gamedevs and supports consoles; wgpu isn't and doesn't.
SDL is for everyone. I use it for a terminal emulator because it’s easier to write something cross platform in SDL than it is to use platform native widgets APIs.
Can the SDL terminal emulator handle up-arrow /slash commands and cool CLI things like Textual and IPython's prompt_toolkit (a readline/.inputrc alternative that supports multi-line editing, argument tab completion, and syntax highlighting), in a game and/or on a PC?
I think you're confusing the roles of terminal emulator and shell. The emulator mainly hosts the window for a text-based application: print to the screen, send input, implement escape sequences, offer scrollback, handle OS copy-paste, etc. The features you mentioned would be implemented by the hosted application, such as a shell (which they've also implemented separately).
Does the SDL terminal emulator support enough of VT100 (is that the right baseline?) to host an [in-game] console shell TUI with advanced features?
I'm not related to hnlmorg, but I'm assuming the project they refer to is mxtty [1], so check for yourself.
I’d love to see Raylib get an SDL GPU backend. I’d pick it up in a heartbeat.
raylib > "How to compile against the [SDL2] backend" https://github.com/raysan5/raylib/discussions/3764
Rust solves the problem of incomplete Kernel Linux API docs
This is one of the biggest advantages of the newer wave of more expressively typed languages like Rust and Swift.
They remove a lot of ambiguity in how something should be held.
Is this data type or method thread safe? Well I don’t need to go look up the docs only to find it’s not mentioned anywhere but in some community discussion. The compiler tells me.
Reviewing code? I don’t need to verify every use of a pointer is safe because the code tells me itself at that exact local point.
This isn’t unique to the Linux kernel. This is every codebase that doesn’t use a memory safe language.
With memory safe languages you can focus so much more on the implementation of your business logic than making sure all your codebases invariants are in your head at a given time.
It's not _just_ memory safety. In my experience, Rust is also liberating in the sense of mutation safety. With memory safe languages such as Java or Python or JavaScript, I must paranoidly clone stuff when passing stuff to various functions whose behaviour I don't intimately know of, and that is a source of constant stress for me.
Also newtype wrappers.
If you have code that deals e.g. with pounds and kilograms, Dollars and Euros, screen coordinates and window coordinates, plain text and HTML and so on, those values are usually encapsulated in safe wrapper structs instead of being passed as raw ints or floats.
This prevents you from accidentally passing the wrong kind of value into a function, and potentially blowing up your $125 million spacecraft[1].
I also find that such wrappers also make the code far more readable, as there's no confusion exactly what kind of value is expected.
[1] https://www.simscale.com/blog/nasa-mars-climate-orbiter-metr...
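The newtype idiom can be sketched even in a dynamic language; here is a hypothetical Pounds/Kilograms pair in Python (in Rust these would be tuple structs like `struct Kilograms(f64);`, and the compiler would catch the mixup statically; Python needs a type checker or a runtime check):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pounds:
    value: float

@dataclass(frozen=True)
class Kilograms:
    value: float

LB_TO_KG = 0.45359237  # exact by definition

def to_kilograms(p: Pounds) -> Kilograms:
    return Kilograms(p.value * LB_TO_KG)

def set_thruster_impulse(impulse: Kilograms) -> float:
    """Accepts only Kilograms; raw floats or Pounds are rejected."""
    if not isinstance(impulse, Kilograms):
        raise TypeError(f"expected Kilograms, got {type(impulse).__name__}")
    return impulse.value
```

With raw floats, `set_thruster_impulse(10.0)` in the wrong unit sails through silently; with wrappers, the wrong kind of value fails loudly at the call site.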
As much as I like Rust, I don’t think that it would have solved the Mars Climate Orbiter problem. That was caused by one party writing numbers out into a CSV file in one unit, and a different party reading the CSV file but assuming that the numbers were in a different unit. Both parties could have been using Rust, and using types to encode physical units, and the problem could still have happened.
W3C CSVW supports per-column schema.
Serialize a dict containing a value with uncertainties and/or Pint (or astropy.units) and complex values to JSON, then read it from JSON back to the same types. Handle datetimes, complex values, and categoricals
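A minimal stdlib-only sketch of that round-trip problem, using a hypothetical tagged-JSON convention (not CSVW, Pint, or astropy; just the shape of the idea):

```python
import json

def encode(obj):
    # Tag non-JSON-native types so they can be restored losslessly.
    if isinstance(obj, complex):
        return {"__type__": "complex", "re": obj.real, "im": obj.imag}
    raise TypeError(f"not serializable: {type(obj).__name__}")

def decode(d):
    if d.get("__type__") == "complex":
        return complex(d["re"], d["im"])
    if d.get("__type__") == "quantity":
        # Placeholder: a real implementation would rebuild e.g. a Pint Quantity.
        return (d["value"], d["unit"])
    return d

data = {"impulse": {"__type__": "quantity", "value": 4.45, "unit": "N*s"},
        "z": 3 + 4j}
text = json.dumps(data, default=encode)
restored = json.loads(text, object_hook=decode)
```

The hard part isn't the tagging, it's agreeing on the tags across apps; which is what CSVW/QUDT try to standardize for tabular data.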
"CSVW: CSV on the Web" https://github.com/jazzband/tablib/issues/305
7 Columnar metadata header rows for a CSV: column label, property URI, datatype, quantity/unit, accuracy, precision, significant figures https://wrdrd.github.io/docs/consulting/linkedreproducibilit...
CSV on the Web: A Primer > 6. Advanced Use > 6.1 How do you support units of measure? https://www.w3.org/TR/tabular-data-primer/#units-of-measure
You can specify units in a CSVW file with QUDT, an RDFS schema and vocabulary for Quantities, Units, Dimensions, and Types
Schema.org has StructuredValue and rdfs:subPropertyOf like QuantitativeValue and QuantitativeValueDistribution: https://schema.org/StructuredValue
There are linked data schema for units, and there are various in-code in-RAM typed primitive and compound type serialization libraries for various programming languages; but they're not integrated, so we're unable to share data with units between apps and fall back to CSVW.
There are zero-copy solutions for sharing variables between cells of e.g. a polyglot notebook with input cells written in more than one programing language without reshaping or serializing.
Sandbox Python notebooks with uv and marimo
Package checksums can be specified per-platform in requirements.txt and Pipfile.lock files. Is it advisable to only list package names, in TOML that can't be parsed from source comments with the AST parser?
.ipynb nbformat inlines binary data outputs from e.g. _repr_png_() as base64 data, rather than delimiting code and binary data in .py files.
One file zipapp PEX Python Executables are buildable with Twitter Pants build, Buck, and Bazel. PEX files are executable ZIP files with a Python header.
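The "executable ZIP with a Python header" idea is in the stdlib too: `zipapp` prepends a shebang line to a ZIP of your sources (PEX layers dependency resolution, platform tags, and more on top). A self-contained sketch:

```python
import pathlib, subprocess, sys, tempfile, zipapp

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp, "app")
    src.mkdir()
    # zipapp runs __main__.py from inside the archive as the entry point.
    (src / "__main__.py").write_text("print('hello from a zipapp')\n")
    target = pathlib.Path(tmp, "app.pyz")
    zipapp.create_archive(src, target, interpreter="/usr/bin/env python3")
    # The .pyz is both a valid ZIP and (on Unix) directly executable.
    out = subprocess.run([sys.executable, str(target)],
                         capture_output=True, text=True).stdout
```

The same archive can also be built from the command line with `python -m zipapp app -o app.pyz`.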
There's a GitHub issue about a Markdown format for notebooks because percent format (`# %%`) and myst-nb formats don't include outputs.
There's also a GitHub issue about Jupytext storing outputs.
Containers are sandboxes. repo2docker with repo2podman sandboxes a git repo with notebooks by creating a container with a recent version of the notebook software on top.
How does your solution differ from ipyflow and rxpy, for example? Are ipywidgets supported or supportable?
> Is it advisable to only list package names, in TOML that can't be parsed from source comments with the AST parser?
This at the top of a notebook is less reproducible and less secure than a requirements.txt with checksums or better:
%pip install ipytest pytest-cov jupyterlab-miami-nights
%pip install -q -r requirements.txt
%pip?
%pip --help
But you don't need to run pip every time you run a notebook, so it's better to comment out the install steps. But then manual input is required to run the notebook in a CI task, unless everything is installed first from a requirements.txt and/or environment.yml or [Jupyter REES] before the notebook runs, like repo2docker does: #%pip install -r requirements.txt
#!pip --help
#!mamba env update -f environment.yml
#!pixi install --manifest-path pyproject.toml_or_pixi.toml
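For example, a hash-pinned requirements.txt makes pip refuse anything whose digest doesn't match (the versions and digests below are placeholders, not real values):

```text
# Install with: pip install --require-hashes -r requirements.txt
# (every requirement must then be pinned exactly and carry at least one hash)
ipytest==<version> \
    --hash=sha256:<digest>
pytest-cov==<version> \
    --hash=sha256:<digest>
```

Tools like pip-tools can generate this file (including the hashes) from a plain list of names, so the notebook itself only needs the one `%pip install -r requirements.txt` line.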
Lucee: A light-weight dynamic CFML scripting language for the JVM
Similar to ColdFusion CFML and Open Source: ZPT Zope Templates: https://zope.readthedocs.io/en/latest/zopebook/ZPT.html
https://zope.readthedocs.io/en/latest/zopebook/AppendixC.htm... :
> Zope Page Templates are an HTML/XML generation tool. This appendix is a reference to Zope Page Templates standards: Template Attribute Language (TAL), TAL Expression Syntax (TALES), and Macro Expansion TAL (METAL).
> TAL Overview: The Template Attribute Language (TAL) standard is an attribute language used to create dynamic templates. It allows elements of a document to be replaced, repeated, or omitted
Repoze is Zope backwards. PyPI is built on the Pyramid web framework, which comes from repoze.bfg, which was written in response to Zope2 and Plone and Zope3.
Chameleon ZPT templates with Pyramid pyramid.renderers.render_to_response(request): https://docs.pylonsproject.org/projects/pyramid/en/latest/na...
FCast: Casting Made Open Source
How does FCast differ from Matter Casting?
https://news.ycombinator.com/item?id=41171060&p=2#41172407
"What Is Matter Casting and How Is It Different From AirPlay or Chromecast?" (2024) https://www.howtogeek.com/what-is-matter-casting-and-how-is-... :
> You can also potentially use the new casting standard to control some of your TV’s functions while casting media on it, a task at which both AirPlay and Chromecast are somewhat limited.
Feature ideas: PIP Picture-in-Picture, The ability to find additional videos and add to a [queue] playlist without stopping the playing video
Instead of waiting and hoping that big companies will implement the standard, just make it as easy as possible to adopt by having receivers for all platforms and client libraries that can cast to AirPlay, Chromecast, FCast, and others seamlessly.
Does the current app support AirPlay and Chromecast as different receivers/backends? The website doesn't mention anything about it. Are there also plans for an iOS app?
Some feedback: I would also add dedicated buttons for downloading macOS and Windows binaries; for typical users the GitLab button will be too scary. The website is also not clear on whether there is an SDK for developers (one that also supports AirPlay and Chromecast) and which language bindings it supports.
GitHub has package repos for hosting package downloads.
A SLSA Builder or Generator can sign packages and container images with sigstore/cosign.
It's probably also possible to build and sign a repo metadata index with GitHub release attachment URLs and host it on GitHub Pages. But to host releases at scale you need a CDN, release signing keys to sign the repo metadata, and clients that update only when the release attachment signature matches the per-release, per-platform key; an app store does that for you.
Show HN: bpfquery – experimenting with compiling SQL to bpf(trace)
Hello! The last few weeks I've been experimenting with compiling sql queries to bpftrace programs and then working with the results. bpfquery.com is the result of that, source available at https://github.com/zmaril/bpfquery. It's a very minimal sql to bpftrace compiler that lets you explore what's going on with your systems. It implements queries, expressions, and filters/wheres/predicates, and has a streaming pivot table interface built on https://perspective.finos.org. I am still figuring out how to do windows, aggregations and joins though, but the pivot table interface actually lets you get surprisingly far. I hope you enjoy it!
RIL about how the ebpf verifier attempts to prevent infinite loops given rule ordering and rewriting transformations.
There are many open query planners; maybe most are hardly reusable.
There's a wasm-bpf; and also duckdb-wasm, sqlite in WASM with replication and synchronization, datasette-lite, JupyterLite
wasm-bpf: https://github.com/eunomia-bpf/wasm-bpf#how-it-works
Does this make databases faster or more efficient? Is there process or query isolation?
The Surprising Cause of Qubit Decay in Quantum Computers
> The transmission of supercurrents is made possible by the Josephson effect, where two closely spaced superconducting materials can support a current with no applied voltage. As a result of the study, previously unattributed energy loss can be traced to thermal radiation originating at the qubits and propagating down the leads.
> Think of a campfire warming someone at the beach – the ambient air stays cold, but the person still feels the warmth radiating from the fire. Karimi says this same type of radiation leads to dissipation in the qubit.
> This loss has been noted before by physicists who have conducted experiments on large arrays of hundreds of Josephson junctions placed in circuit. Like a game of telephone, one of these junctions would seem to destabilize the rest further down the line.
ScholarlyArticle: "Bolometric detection of Josephson radiation" (2024) https://www.nature.com/articles/s41565-024-01770-7
"Computer Scientists Prove That Heat Destroys Quantum Entanglement" (2024) https://news.ycombinator.com/item?id=41381849
The Future of TLA+ [pdf]
A TLA+ alternative people might find curious.
What are other limits and opportunities for TLA+ and similar tools?
Limits of TLA+
- It cannot compile to working code
- Steep learning curve
Opportunities for TLA+
- Helps you understand complex abstractions & systems clearly.
- It's extremely effective at communicating the components that make up a system with others.
Let me give you a real practical example.
In AI models there is a component called a "Transformer". It underpins ChatGPT (the "T" in ChatGPT).
If you read the 2017 Transformer paper "Attention Is All You Need",
they use human language, diagrams, and mathematics to describe their idea.
However, if you try to build your own "Transformer" using that paper as your only resource, you're going to struggle interpreting what they are saying to get working code.
Even if you get the code working, how sure are you that what you have created is EXACTLY what the authors are talking about?
English is too verbose, diagrams are open to interpretation, mathematics is too ambiguous/abstract, and already-written code is too dense.
TLA+ is a notation that tends to be used to "specify systems".
In TLA+ everything is defined in terms of a state machine: hardware, software algorithms, consensus algorithms (Paxos, Raft, etc.).
So why TLA+?
If something is "specified" in TLA+;
- You know exactly what it is — just by interpreting the TLA+ spec
- If you have an idea to communicate, TLA+-literate people can understand exactly what you're talking about.
- You can find bugs in algorithms, hardware, and processes just by modeling them in TLA+. So before building hardware or software you can check its validity & fix flaws in its design, rather than committing expensive resources only to subsequently find issues in production.
Is that a practical example? Has anyone specified a transformer using TLA+? More generally, is TLA+ practical for code that uses a lot of matrix multiplication?
The most practical examples I’m aware of are the usage of TLA+ to specify systems at AWS: https://lamport.azurewebsites.net/tla/formal-methods-amazon....
From "Use of Formal Methods at Amazon Web Services" (2014) https://lamport.azurewebsites.net/tla/formal-methods-amazon.... :
> What Formal Specification Is Not Good For: We are concerned with two major classes of problems with large distributed systems: 1) bugs and operator errors that cause a departure from the logical intent of the system, and 2) surprising ‘sustained emergent performance degradation’ of complex systems that inevitably contain feedback loops. We know how to use formal specification to find the first class of problems. However, problems in the second category can cripple a system even though no logic bug is involved. A common example is when a momentary slowdown in a server (perhaps due to Java garbage collection) causes timeouts to be breached on clients, which causes the clients to retry requests, which adds more load to the server, which causes further slowdown. In such scenarios the system will eventually make progress; it is not stuck in a logical deadlock, livelock, or other cycle. But from the customer's perspective it is effectively unavailable due to sustained unacceptable response times. TLA+ could be used to specify an upper bound on response time, as a real-time safety property. However, our systems are built on infrastructure (disks, operating systems, network) that do not support hard real-time scheduling or guarantees, so real-time safety properties would not be realistic. We build soft real-time systems in which very short periods of slow responses are not considered errors. However, prolonged severe slowdowns are considered errors. We don’t yet know of a feasible way to model a real system that would enable tools to predict such emergent behavior. We use other techniques to mitigate those risks.
Delay, cycles, feedback; [complex] [adaptive] nonlinearity
Formal methods including TLA+ also can't/don't prevent or can only workaround side channels in hardware and firmware that is not verified. But that's a different layer.
> This raised a challenge; how to convey the purpose and benefits of formal methods to an audience of software engineers? Engineers think in terms of debugging rather than ‘verification’, so we called the presentation “Debugging Designs” [8] . Continuing that metaphor, we have found that software engineers more readily grasp the concept and practical value of TLA+ if we dub it:
Exhaustively testable pseudo-code
> We initially avoid the words ‘formal’, ‘verification’, and ‘proof’, due to the widespread view that formal methods are impractical. We also initially avoid mentioning what the acronym ‘TLA’ stands for, as doing so would give an incorrect impression of complexity.

Isn't there a hello world with vector clocks tutorial? A simple, formally-verified hello world kernel module with each of the potential methods would be demonstrative, but then don't you need to model the kernel with abstract distributed concurrency primitives too?
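"Exhaustively testable pseudo-code" can be approximated in a few lines: a toy breadth-first state-space check (not TLA+/TLC, just a sketch of the idea) that exhaustively explores every reachable state of a tiny two-process system and checks an invariant in each one:

```python
from collections import deque

def check(init, next_states, invariant):
    """Exhaustively explore reachable states; return a violating state or None."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s  # counterexample state
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

# Toy spec: two processes each increment a shared counter up to 3 times.
# State: (counter, pc0, pc1)
def next_states(s):
    c, a, b = s
    out = []
    if a < 3: out.append((c + 1, a + 1, b))
    if b < 3: out.append((c + 1, a, b + 1))
    return out

violation = check((0, 0, 0), next_states, invariant=lambda s: s[0] <= 6)
```

TLC does the same thing at scale (symmetry reduction, liveness checking, counterexample traces), but the "try every state, test the invariant" core is what distinguishes this from ordinary unit testing.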
From https://news.ycombinator.com/item?id=40980370 ;
> - [ ] DOC: learnxinyminutes for tlaplus
> TLAplus: https://en.wikipedia.org/wiki/TLA%2B
> awesome-tlaplus > Books, (University) courses teaching (with) TLA+: https://github.com/tlaplus/awesome-tlaplus#books
FizzBee, Nagini, deal-solver, z3, dafny; https://news.ycombinator.com/item?id=39904256#39938759 ,
"Industry forms consortium to drive adoption of Rust in safety-critical systems" (2024) https://news.ycombinator.com/item?id=40680722
awesome-safety-critical:
Computer Scientists Prove That Heat Destroys Quantum Entanglement
This is really not a surprise. And I'd like to clarify more misconceptions of modern science:
It isn't their identity the atoms give up, their hyperdimensional vibration synchronizes.
This is puzzling and spooky enough to warrant all of those flavored adjectives because science is convinced the Universe is smallest states built up into our universe, when in actuality the Universe is a singularity of potential which distributes itself (over nothing), and state continuously resolves through constructive and destructive interference (all space/time manifestation), probably bound by spin characteristic (like little knots which cannot unwind themselves.)
As universal potential exists outside of space and time (giving rise to it, an alt theory to the big bang), when particles are synchronized (at any distance) the dispositions (not identities) of their potentials are bound (identity would be the localized existential particle). Any destructive interference upon the hyperdimensional vibration will destroy the entanglement.
The domain to be explored is, what can we do with constructive interference?
Modern science worshipers will have to bite on this one, and admit their sacred axioms are wrong.
So, modulating thermal insulation of (a non-superconducting and non-superfluidic or any) quantum simulator results in loss of entanglement.
How, then, can entanglement across astronomical distances occur without cooler temps the whole way there, if heat destroys all entanglement?
Would helical polarization like quasar astrophysical jets be more stable than other methods for entanglement at astronomical distances?
Btw, entangled swarms can be re-entangled over and over and over. The same entangled scope.
The entanglement is the vibrational synchrony in the zero dimensional Universal Potential (or just capacity for existential Potential, to sound less grandiose.)
So everything at that vibrational axis will bias to the same disposition.
There are more vibrational axes than atoms in the Universe; these are the purely random Q we expect when made decoherent.
Synchronized, our technology will come by how we may constructively and destructively interfere with the informational content of the "disposition". Any perturbation is informational, and I think you can broadcast analog motion pictures in hyperdimensional clarity with sound and warm lighting; qubits are irrelevant and a dead end.
Quantum holography is a sieve of some sort, where shadows may be cast upon extradimensional walls.
Hash collisions may be found by superimposing the probabilities factored by their algorithms such that constructive and destructive interference reduce the multidimensional model to the smallest probable sets.
Cyphers may be broken by combining constructive key attempts (thousands of millions at a time?) in a "singular domain" with the hash collision solution.
Heat at some level, not every level. Same goes for magnets or heck, even biological specimens.
Computer haxor discovers that heat destroys all signs of life in organic material.
Sensitive apparatus requires insulation.
98% accuracy in predicting diseases by the colour of the tongue
"Tongue Disease Prediction Based on Machine Learning Algorithms" (2024) https://www.mdpi.com/2227-7080/12/7/97 :
> This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The obtained results from the machine learning algorithms illustrated that XGBoost had the highest accuracy at 98.71%, while the NB algorithm had the lowest accuracy, with 91.43%
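A toy sketch of just the color-assignment step (nearest centroid on raw RGB, stdlib only; the centroid values here are made up for illustration, and the paper's actual pipeline uses five color spaces and trained models like XGBoost rather than fixed reference colors):

```python
# Hypothetical reference colors for a few of the paper's seven classes.
CENTROIDS = {
    "red":    (200, 60, 60),
    "yellow": (210, 190, 80),
    "pink":   (230, 160, 170),
    "white":  (235, 235, 230),
}

def classify(rgb):
    """Assign an RGB tongue-region sample to the nearest reference color."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda name: dist2(rgb, CENTROIDS[name]))
```

The hard part the paper addresses is exactly what this sketch ignores: lighting and saturation shift the RGB values, which is why they train across color spaces and lighting conditions.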
"Development of Deep Ensembles to Screen for Autism and Symptom Severity Using Retinal Photographs" (2023) https://jamanetwork.com/journals/jamanetworkopen/fullarticle... ; 100% accuracy where a recently-published best method with charts is 80% accurate: "Machine Learning Prediction of Autism Spectrum Disorder from a Minimal Set of Medical and Background Information" (2024) https://jamanetwork.com/journals/jamanetworkopen/fullarticle... https://github.com/Tammimies-Lab/ASD_Prediction_ML_Rajagopal...
I don't think any of the medical imaging NN training projects have similar colorspace analysis.
The possibilities for dark matter have shrunk
There is no dark matter in theories of superfluid quantum gravity.
Dirac later arrived at the Dirac sea. Gödel had already proposed dust solutions, which are fluidic.
PBS Spacetime has a video out now on whether gravity is actually random. I don't know whether it addresses theories of superfluid quantum gravity.
>>> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2015) https://hal.science/hal-01248015/
Database “sharding” came from Ultima Online? (2009)
Wikipedia has:
Shard (database architecture) > Etymology: https://en.wikipedia.org/wiki/Shard_(database_architecture)#...
Partition > Horizontal partitioning --> Sharding: https://en.wikipedia.org/wiki/Partition_(database)
Database scalability > Techniques > Partitioning: https://en.wikipedia.org/wiki/Database_scalability
Network partition: https://en.wikipedia.org/wiki/Network_partition
why is every social media network now just "post something with varying levels of correctness because it farms engagement"
it's so exhausting needing to just read comments to get the actual, real truth
Do you realize that the linked Wikipedia post agrees with the article? It lists Ultima Online as one of two likely sources for the term "sharding."
Brain found to store three copies of every memory
I think the evidence for this should be extraordinary because it is so apparently unlikely.
Why would the brain store multiple copies of a memory? It’s so inefficient.
On a small level computers do that: cpu cache (itself 3 levels), GPU memory (optional), main memory, hdd cache.
The reason why animal brains would store multiple copies of a memory is functional proximity. You need memory near the limbic system for neurotic processing and fear conditioning. Long term storage is near the brain stem so that it can eventually bleed into the cerebellum to become muscle memory. There is memory storage near the frontal lobes so that people can reason about with their advanced processors like speech parsing, visual cortex, decision bias, and so forth.
OT ScholarlyArticle: "Divergent recruitment of developmentally defined neuronal ensembles supports memory dynamics" (2024) https://www.science.org/doi/10.1126/science.adk0997
"Our Brains Instantly Make Two Copies of Each Memory" (2017) https://www.pbs.org/wgbh/nova/article/our-brains-instantly-m... :
> We might be wrong. New research suggests that our brains make two copies of each memory in the moment they are formed. One is filed away in the hippocampus, the center of short-term memories, while the other is stored in cortex, where our long-term memories reside.
From https://www.psychologytoday.com/us/blog/the-athletes-way/201... :
> Surprisingly, the researchers found that long-term memories remain "silent" in the prefrontal cortex for about two weeks before maturing and becoming consolidated into permanent long-term memories.
ScholarlyArticle: "Engrams and circuits crucial for systems consolidation of a memory" (2017) https://www.science.org/doi/10.1126/science.aam6808
So that makes four (4) copies of each memory in the brain if you include the engram cells in the prefrontal cortex.
What is the survival advantage to redundant, resilient recall; why do brains with such traits survive and where and when in our evolutionary lineage did such complexity arise?
"Spatial Grammar" in DNA: Breakthrough Could Rewrite Genetics Textbooks
ScholarlyArticle: "Position-dependent function of human sequence-specific transcription factors" (2024) https://www.nature.com/articles/s41586-024-07662-z
Celebrating 6 years since Valve announced Steam Play Proton for Linux
I've seen it joked that with Proton, Win32 is a good stable ABI for gaming on Linux.
Given that, and that I'm most comfortable developing on Linux and in C++, does anyone have a good toolchain recommendation for cross compiling C++ to Windows from Linux? (Ideally something that I can point CMake at and get a build that I can test in Proton.)
MinGW works well (e.g. mingw-w64 in Debian/Ubuntu). It works well with CMake, you just need to pass CMake flags like
-DCMAKE_CXX_COMPILER=x86_64-w64-mingw32-c++ -DCMAKE_C_COMPILER=x86_64-w64-mingw32-c
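A toolchain file keeps those flags out of the command line and also tells CMake how to search the cross sysroot; a minimal sketch (paths and compiler names assume the Debian/Ubuntu mingw-w64 packages):

```cmake
# mingw-w64-x86_64.cmake -- use with:
#   cmake -DCMAKE_TOOLCHAIN_FILE=mingw-w64-x86_64.cmake ..
set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_C_COMPILER   x86_64-w64-mingw32-gcc)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++)
set(CMAKE_RC_COMPILER  x86_64-w64-mingw32-windres)
# Search the mingw sysroot for headers/libraries, but not for build tools.
set(CMAKE_FIND_ROOT_PATH /usr/x86_64-w64-mingw32)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

The resulting .exe can then be smoke-tested under Wine or Proton without leaving Linux.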
Something like SDL or Raylib is useful for cross-platform windowing and sound if you are writing games.

Hey, TuxMath is written with SDL, though there's a separate JS port now IIUC.
/? mingw: https://github.com/search?q=mingw&type=repositories
The msys2/MINGW-packages are PKGBUILD packages: https://github.com/msys2/MINGW-packages :
> Package scripts for MinGW-w64 targets to build under MSYS2.
> [..., SDL, gles, glfw, egl, glbinding, cargo-c, gtest, cppunit, qt5, gtk4, icu, ki-i18n-qt5, SDL2_pango, jack2, gstreamer, ffmpeg, blender, gegl, gnome-text-editor, gtksourceview, kdiff3, libgit2, libusb, libressl, libsodium, libserialport, libslirp, hugo]
PKGBUILD is a bash script packaging spec from Arch, which builds packages with makepkg.
msys2/setup-msys2: https://github.com/msys2/setup-msys2:
> setup-msys2 is a GitHub Action (GHA) to setup an MSYS2 environment (i.e. MSYS, MINGW32, MINGW64, UCRT64, CLANG32, CLANG64 and/or CLANGARM64 shells)
Though, if you're writing a 3D app/game: panda3d, for example, already builds for Windows, Mac, and Linux; pygbag compiles panda3d to WASM; there's also harfang-wasm; and TIL about leptos, which is more like React in Rust and should work for GUI apps, too.
panda3d > Building applications: https://docs.panda3d.org/1.11/python/distribution/building-b...
https://github.com/topics/mingw :
> nCine, win-sudo, drmingw debugger,
mstorsjo/llvm-mingw: https://github.com/mstorsjo/llvm-mingw :
> Address Sanitizer and Undefined Behaviour Sanitizer, LLVM Control Flow Guard -mguard=cf ; i686, x86_64, armv7 and arm64
msys2/MINGW-packages: "[Wish] Add 3D library for Python" https://github.com/msys2/MINGW-packages/issues/21325
panda3d/panda3d: https://github.com/panda3d/panda3d
Before Steam for Linux, there was tuxmath.
tux4kids/tuxmath//mingw/ has a Code::Blocks build config with mingw32 fwics: https://github.com/tux4kids/tuxmath/blob/master/mingw/tuxmat...
Code::Blocks: https://en.m.wikipedia.org/wiki/Code::Blocks
There's a CMake build: https://github.com/tux4kids/tuxmath/blob/master/CMakeLists.t...
But it says the autotools build is still the canonical one: configure.ac for autoconf and Makefile.am for automake
SDL supports SVG since SDL_image 2.0.2 with IMG_LoadSVG_RW() and since SDL_image 2.6.0 with IMG_LoadSizedSVG_RW(): https://wiki.libsdl.org/SDL2_image/IMG_LoadSizedSVG_RW
conda-forge has SDL on Win/Mac/Lin.
conda-forge/sdl2-feedstock: https://github.com/conda-forge/sdl2-feedstock
emscripten-forge does not yet have SDL or .*gl.* or tuxmath or bash or busybox: https://github.com/emscripten-forge/recipes/tree/main/recipe...
conda-forge/panda3d-feedstock / recipe/meta.yaml builds panda3d on Win, Mac, Lin, and Github Actions containers: https://github.com/conda-forge/panda3d-feedstock/blob/main/r... :
conda install -c conda-forge panda3d # mamba install panda3d # miniforge
# pip install panda3d
Panda3d docs > Distributing Panda3D Applications > Third-party dependencies lists a few libraries that I don't think are in the MSYS or conda-forge package repos:
https://docs.panda3d.org/1.10/python/distribution/thirdparty...
What about math games on open source operating systems, Steam?
A Manim renderer for [game engine] would be cool for school, and cool for STEM.
"Render and interact with through Blender, o3de, panda3d ManimCommunity/manim#3362" https://github.com/ManimCommunity/manim/issues/3362
GitHub Named a Leader in the Gartner First Magic Quadrant for AI Code Assistants
Gartner "Magic Quadrant for AI Code Assistants" (2024) https://www.gartner.com/doc/reprints?id=1-2IKO4MPE&ct=240819...
Additional criteria for assessing AI code assistants from https://news.ycombinator.com/item?id=40478539 re: Text-to-SQL benchmarks :
codefuse-ai/Awesome-Code-LLM > Analysis of AI-Generated Code, Benchmarks: https://github.com/codefuse-ai/Awesome-Code-LLM :
> 8.2. Benchmarks: * Integrated Benchmarks, Program Synthesis, Visually Grounded Program Synthesis, Code Reasoning and QA, Text-to-SQL, Code Translation, Program Repair, Code Summarization, Defect/Vulnerability Detection, Code Retrieval, Type Inference, Commit Message Generation, Repo-Level Coding*
OT did not assess:
Aider: https://github.com/paul-gauthier/aider :
> Aider works best with GPT-4o & Claude 3.5 Sonnet and can connect to almost any LLM.
> Aider has one of the top scores on SWE Bench. SWE Bench is a challenging software engineering benchmark where aider solved real GitHub issues from popular open source projects like django, scikit-learn, matplotlib, etc.
SWE Bench benchmark: https://www.swebench.com/
A carbon-nanotube-based tensor processing unit
"A carbon-nanotube-based tensor processing unit" (2024) https://www.nature.com/articles/s41928-024-01211-2 :
> Abstract: The growth of data-intensive computing tasks requires processing units with higher performance and energy efficiency, but these requirements are increasingly difficult to achieve with conventional semiconductor technology. One potential solution is to combine developments in devices with innovations in system architecture. Here we report a tensor processing unit (TPU) that is based on 3,000 carbon nanotube field-effect transistors and can perform energy-efficient convolution operations and matrix multiplication. The TPU is constructed with a systolic array architecture that allows parallel 2 bit integer multiply–accumulate operations. A five-layer convolutional neural network based on the TPU can perform MNIST image recognition with an accuracy of up to 88% for a power consumption of 295 µW. We use an optimized nanotube fabrication process that offers a semiconductor purity of 99.9999% and ultraclean surfaces, leading to transistors with high on-current densities and uniformity. Using system-level simulations, we estimate that an 8 bit TPU made with nanotube transistors at a 180 nm technology node could reach a main frequency of 850 MHz and an energy efficiency of 1 tera-operations per second per watt.
i.e. 1 TOPS/W: one tera-operation per second per watt, or about 1 pJ per operation
"Ask HN: Can CPUs etc. be made from just graphene and/or other carbon forms?" (2024) https://news.ycombinator.com/item?id=40719725
"Ask HN: How much would it cost to build a RISC CPU out of carbon?" (2024) https://news.ycombinator.com/item?id=41153490
Or CNT Carbon Nanotubes, or TWCNT Twisted Carbon Nanotubes.
From "Rivian reduced electrical wiring by 1.6 miles and 44 pounds" (2024) https://news.ycombinator.com/item?id=41210021 :
> Are there yet CNT or TWCNT Twisted Carbon Nanotube substitutes for copper wiring?
"Twisted carbon nanotubes store more energy than lithium-ion batteries" (2024) https://news.ycombinator.com/item?id=41159421
- NewsArticle about the ScholarlyArticle: "The first tensor processor chip based on carbon nanotubes could lead to energy-efficient AI processing" (2024) https://techxplore.com/news/2024-08-tensor-processor-chip-ba...
How to build a 50k ton forging press
> By the early 2000s, parts from the heavy presses were in every U.S. military aircraft in service, and every airplane built by Airbus and Boeing.
> The savings on a heavy bomber was estimated to be even greater, around 5-10% of its total cost; savings on the B-52 alone were estimated to be greater than the entire cost of the Heavy Press Program.
These are wild stats.
Great article! I was fascinated to learn about the Heavy Press program for the first time, here on HN[1] a month ago, and am glad more about it is being posted.
It makes me think: what other processes could redefine an industry or way of thinking/designing if taken a step further? We had forging and extrusion presses … but huge, high pressure ones changed the game entirely.
> It makes me think: what other processes could redefine an industry or way of thinking/designing if taken a step further
Pressure-injection molded hemp plastic certainly meets spec for automotive and aerospace applications.
"Plant-based epoxy enables recyclable carbon fiber" (2022) [that's stronger than steel and lighter than fiberglass] https://news.ycombinator.com/item?id=30138954 ... https://news.ycombinator.com/item?id=37560244
Silica aerogels are dermally abrasive. Applications for non-silica aerogels - for example hemp aerogels - include thermal insulation, packaging, maybe upholstery fill.
There's a new method to remove oxygen from Titanium: "Cheap yet ultrapure titanium metal might enable widespread use in industry" (2024) https://news.ycombinator.com/item?id=40768549
"Electric recycling of Portland cement at scale" (2024) https://www.nature.com/articles/s41586-024-07338-8 ... "Combined cement and steel recycling could cut CO2 emissions" https://news.ycombinator.com/item?id=40452946
"Researchers create green steel from toxic [aluminum production waste] red mud in 10 minutes" (2024) https://newatlas.com/materials/toxic-baulxite-residue-alumin...
There are many new imaging methods for quality inspection of steel and other metals and alloys, and biocomposites.
"Seeding steel frames brings destroyed coral reefs back to life" (2024) https://news.ycombinator.com/item?id=39735205
[deleted]
Seven basic rules for causal inference
At the bottom, the author mentions that by "correlation" they don't mean "linear correlation", but all their diagrams show the presence or absence of a clear linear correlation, and code examples use linear functions of random variables.
They offhandedly say that "correlation" means "association" or "mutual information", so why not just do the whole post in terms of mutual information? I think the main issue with that is just that some of these points become tautologies -- e.g. the first point, "independent variables have zero mutual information" ends up being just one implication of the definition of mutual information.
This isn't a correction to your post, but a clarification for other readers: correlation implies dependence, but dependence does not imply correlation. By contrast, two variables share non-zero mutual information if and only if they are dependent.
By that measure, all of these Spurious Correlations indicate dependence that isn't of practical utility: https://www.tylervigen.com/spurious-correlations
Isn't it possible to contrive an example where a test of pairwise dependence causes the statistician to error by excluding relevant variables from tests of more complex relations?
Trying to remember which of these factor both P(A|B) and P(B|A) into the test
I think you're using the word "insignificant" in a possibly misleading or confusing way.
I think in this context, the issue with the spurious correlations from that site is that they're all time series for overlapping periods. Of course, the people who collected these understood that time was an important causal factor in all these phenomena. In the graphical language of this post:
T --> X_i
T --> X_j
Since T is a common cause to both, we should expect to see a mutual information between X_i, X_j. In the paradigm here, we could try to control for T and see if a relationship persists (i.e. perhaps in the same month, collect observations for X_i, X_j in each of a large number of locales), and get a signal on whether the shared dependence on time is the only link.
If a test of dependence shows no significant results, that's not conclusive because of complex, nonlinear, and quantum 'functions'.
How are effect lag and lead expressed in said notation for expressing causal charts?
Should we always assume that t is a monotonically-increasing series, or is it just how we typically sample observations? Can traditional causal inference describe time crystals?
What is the quantum logical statistical analog of mutual information?
Are there pathological cases where mutual information and quantum information will not discover a relationship?
Does Quantum Mutual Information account for Quantum Discord if it only uses von Neumann definition of entropy?
Launch HN: MinusX (YC S24) – AI assistant for data tools like Jupyter/Metabase
Hey HN! We're Vivek, Sreejith and Arpit, and we're building MinusX (https://minusx.ai), a data science assistant for Jupyter and Metabase. MinusX is a Chrome extension (https://minusx.ai/chrome-extension) that adds an AI sidechat to your analytics apps. Given an instruction, our agent operates your app (by clicking and typing, just like you would) to analyze data and answer queries. Broadly, you can do 3 types of things: ask for hypotheses and explore data, extend existing notebooks/dashboards, or select a region and ask questions. There's a simple video walkthrough here: https://www.youtube.com/watch?v=BbHPyX2lJGI. The core idea is to "upgrade" existing tools, where people already do most of their data work, rather than building a new platform.
I (Vivek) spent 6 years working in various parts of the data stack, from data analysis at a 1000+ person ride hailing company to research at comma.ai, where I also handled most of the metrics and dashboarding infrastructure. The problems with data, surprisingly, were pretty much the same. Developers and product managers just want answers, or want to set up a quick view of some metric they care about. They often don't know which table contains what information, or what specific secret filters need to be kept in mind to get clean data. At large companies, analysts/scientists take care of most of these requests over a thousand back-and-forths. In small companies, most data projects end up being one-off efforts, and many die midway.
I've tried every new shiny analytics app out there and none of them fully solve this core issue. New tools also come with a massive cost: you have to convince everyone around you to move, change all your workflows and hope the new tool has all features your trusty old one did. Most people currently go to ChatGPT with barely any real background context, and admonish the model till it sputters some useful code, SQL or hypothesis. This is the kind of user we're trying to help.
The philosophy of MinusX mirrors that of comma. Just as comma is working on "an AI upgrade for your car", we want to retrofit analytics software with abilities that LLMs have begun to unlock. We also get a kick out of the fact that we use the same APIs humans use (clicking and typing), so we don't really need "permission" from any analytics app (just like comma.ai does not need permission from Mr Toyota Corolla) :)
How it works: Given an instruction, the MinusX chrome extension first constructs a simplified representation of the host application's state using the DOM, and a bunch of application specific cues. We also have a set of (currently) predefined actions (eg: clicking and typing) that the agent can use to interact with the host application. Any "complex action" can be described as a combination of these action-primitives. We send this entire context, the instruction and the actions to an LLM. The LLM responds with a sequence of actions which are executed and the revised state is computed and sent back to the LLM. This loop terminates when the LLM evaluates that the desired goals are met. Our architecture allows users to extend the capabilities of the agent by specifying new actions as combinations of the action-primitives. We're working on enabling users to do this through the extension itself.
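The loop described above could be sketched roughly like this (every name here is invented for illustration; this is not MinusX's actual code or plan format):

```python
# Hypothetical observe -> plan -> act loop. `call_llm`, the action names,
# and the plan dict shape are all assumptions, not the real MinusX API.
def run_agent(instruction, get_state, actions, call_llm, max_steps=10):
    for _ in range(max_steps):
        plan = call_llm(instruction, get_state())  # LLM sees state + instruction
        for name, args in plan["actions"]:
            actions[name](*args)                   # execute action primitives
        if plan["done"]:                           # LLM judges the goal is met
            return

# Stub usage: a fake "LLM" that types the instruction once and declares success.
log = []
actions = {"type": lambda text: log.append(("type", text))}
fake_llm = lambda instr, state: {"actions": [("type", (instr,))], "done": True}
run_agent("SELECT 1", lambda: {"page": "metabase"}, actions, fake_llm)
```

The interesting part is that "complex actions" are just named compositions of the same primitives, so users can extend the action table without touching the loop.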
"Retrofitting" is a weird concept for software, and we've found that it takes a while for people to grasp what this actually implies. We think, with AI, it will be more of a thing. Most software we use will be "upgraded" and not always by the people making the original software.
We ourselves are focused on data analytics because we've worked in and around data science / data analysis / data engineering all our careers - working at startups, Google, Meta, etc - and understand it decently well. But since "retrofitting" can be just as useful for a bunch of other field-specific software, we're going to open-source the entire extension and associated plumbing in the near future.
Also, let's be real - a sequence of function calls rammed through a decision tree does not make any for-loop "agentic". The reality is that a large amount of in-the-loop data needed for tasks such as ours does not exist yet! Getting this data flywheel running is a very exciting axis as well.
The product is currently free to use. In the future, we'll probably charge a monthly subscription fee, and support local models / bring-your-own-keys. But we're still working that out.
We'd be super stoked for you to try out MinusX! You can find the extension here: https://minusx.ai/chrome-extension. We've also created a playground with data, for both Jupyter and Metabase, so once the extension is installed you can take it for a spin: https://minusx.ai/playground
We'd love to hear what you think about the idea, and anything else you'd like to share! Suggestions on which tools to support next are most welcome :)
XAI! Explainable AI: https://en.wikipedia.org/wiki/Explainable_artificial_intelli...
Use case: Evidence-based policy; impact: https://en.wikipedia.org/wiki/Evidence-based_policy
Test case: "Find leading economic indicators like bond yield curve from discoverable datasets, and cache retrieved data like or with pandas-datareader"
Use case: Teach Applied ML, NNs, XAI: Explainable AI, and first ethics
Tools with integration opportunities:
Google Model Explorer: https://github.com/google-ai-edge/model-explorer
Yellowbrick ML; teaches ML concepts with Visualizers for humans working with scikit-learn, which can be used to ensemble LLMs and other NNs because of its Estimator interfaces : https://www.scikit-yb.org/en/latest/
Manim, ManimML, Blender, panda3d, unreal: "Explain this in 3d, with an interactive game"
Khanmigo; "Explain this to me with exercises"
"And Calculate cost of computation, and Identify relatively sustainable lower-cost methods for these computations"
"Identify where this process, these tools, and experts picking algos, hyperparameters, and parameters has introduced biases into the analysis, given input from additional agents"
Uv 0.3 – Unified Python packaging
I love the idea of Rye/uv/PyBi on managing the interpreter, but I get queasy that they are not official Python builds. Probably no issue, but it only takes one subtle bug to ruin my day.
Plus the supply chain potential attack. I know official Python releases are as good as I can expect of a free open source project. While the third party Python distribution is probably being built in Nebraska.
I'm from Nebraska. Unfortunately if your Python is compiled in a datacenter in Iowa, it's more likely that it was powered with wind energy. Claim: Iowa has better Clean Energy PPAs for datacenters than Nebraska (mostly due to rational wind energy subsidies).
Anyways, software supply chain security and Python & package build signing and then containers and signing them too
Conda-forge's builds are probably faster than the official CPython builds. conda-forge/python-feedstock//recipe/meta.yaml: https://github.com/conda-forge/python-feedstock/blob/main/re...
Conda-forge also has OpenBLAS, BLIS, Accelerate, netlib, and Intel MKL; conda-forge docs > switching BLAS implementation: https://conda-forge.org/docs/maintainer/knowledge_base/#swit...
From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :
> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.
> We will show how to use this in practice with `rattler-build`
> For GPUs, conda-forge has supported different CUDA levels for a long time, and we'll look at how that is used as well.
> Lastly, we also take a look at PyPI. There are ongoing discussions on how to improve support for wheels with CUDA support. We are going to discuss how the (pre-)PEP works and synergy possibilities of rattler-build and cibuildwheel
Linux distros build and sign Python and python3-* packages with GPG keys or similar, and then the package manager optionally checks the per-repo keys for each downloaded package. Packages should include a manifest of files to be installed, with per-file checksums. Package manifests and/or the package containing the manifest should be signed (so that tools like debsums and rpm --verify can detect disk-resident executable, script, data asset, and configuration file changes)
virtualenvs can be mounted as a volume at build time with -v with some container image builders, or copied into a container image with the ADD or COPY instructions in a Containerfile. What is added to the virtualenv should have a signature and a version.
ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
> Fetch a container image and verify that the container image is signed according to the policy set in /etc/containers/policy.json (see containers-policy.json(5)).
So, when you sign a container full of packages, you should check the package signatures; and verify that all package dependencies are identified by the SBOM tool you plan to use to keep dependencies upgraded when there are security upgrades.
e.g. Dependabot - if working - will regularly run and send a pull request when it detects that the version strings in e.g. a requirements.txt or environment.yml file are out of date and need to be changed because of reported security vulnerabilities in ossf/osv-schema format.
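For reference, the Dependabot setup mentioned above is a small config file (a minimal sketch; the ecosystem and schedule values are illustrative):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"   # watches requirements.txt / pyproject.toml
    directory: "/"
    schedule:
      interval: "weekly"
```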
Is there already a way to, as a developer, sign Python packages built with cibuildwheel with Twine and TUF or sigstore to be https://SLSA.dev/ compliant?
Show HN: PgQueuer – Transform PostgreSQL into a Job Queue
PgQueuer is a minimalist, high-performance job queue library for Python, leveraging the robustness of PostgreSQL. Designed for simplicity and efficiency, PgQueuer uses PostgreSQL's LISTEN/NOTIFY to manage job queues effortlessly.
Does the celery SQLAlchemy broker support PostgreSQL's LISTEN/NOTIFY features?
Similar support in SQLite would simplify testing applications built with celery.
How to add table event messages to SQLite so that the SQLite broker has the same features as AMQP? Could a vtable facade send messages on table events?
Are there SQLite triggers?
Celery > Backends and Brokers: https://docs.celeryq.dev/en/stable/getting-started/backends-...
/? sqlalchemy listen notify: https://www.google.com/search?q=sqlalchemy+listen+notify :
asyncpg.Connection.add_listener
sqlalchemy.event.listen, @listen_for
psycopg2 conn.poll(), while connection.notifies
psycopg2 > docs > advanced > Asynchronous notifications: https://www.psycopg.org/docs/advanced.html#asynchronous-noti...
PgQueuer.db, PgQueuer.listeners.add_listener; asyncpg add_listener: https://github.com/janbjorge/PgQueuer/blob/main/src/PgQueuer...
asyncpg/tests/test_listeners.py: https://github.com/MagicStack/asyncpg/blob/master/tests/test...
/? sqlite LISTEN NOTIFY: https://www.google.com/search?q=sqlite+listen+notify
sqlite3 update_hook: https://www.sqlite.org/c3ref/update_hook.html
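On the SQLite side, a trigger can approximate NOTIFY by appending to an events table that a worker polls (a rough sketch; the table and trigger names here are made up, not part of any library):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    payload TEXT,
    status TEXT DEFAULT 'queued'
);
CREATE TABLE job_events (
    id INTEGER PRIMARY KEY,
    job_id INTEGER,
    event TEXT
);
-- Emulate LISTEN/NOTIFY: every enqueue is recorded for pollers
CREATE TRIGGER jobs_notify AFTER INSERT ON jobs
BEGIN
    INSERT INTO job_events (job_id, event) VALUES (NEW.id, 'enqueued');
END;
""")
conn.execute("INSERT INTO jobs (payload) VALUES (?)", ("task-1",))
events = conn.execute("SELECT job_id, event FROM job_events").fetchall()
```

Unlike NOTIFY this is pull-based (workers must poll job_events), which is exactly the gap the update_hook / vtable ideas above would close.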
Can Large Language Models Understand Symbolic Graphics Programs?
What an awful paper title, saying "Symbolic Graphics Programs" when they just mean "vector graphics". I don't understand why they can not just use the established term instead. Also, there is no "program" here, in the same way that coding HTML is not programming, as vector graphics are not supposed to be Turing complete. And where they pulled the "symbolic" from is completely beyond me.
I'm more curious how they think LLM's can imagine things:
> To understand symbolic programs, LLMs may need to possess the ability to imagine how the corresponding graphics content would look without directly accessing the rendered visual content
To my understanding, LLMs are predictive engines based upon their tokens and embeddings without any ability to "imagine" things.
As such, an LLM might be able to tell you that the following SVG is a black circle because it is in Mozilla documentation[0]:
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
<circle cx="50" cy="50" r="50" />
</svg>
However, I highly doubt any LLM could tell you the following is a "Hidden Mickey" or "Mickey Mouse Head Silhouette": <svg viewBox="0 0 175 175" xmlns="http://www.w3.org/2000/svg">
<circle cx="100" cy="100" r="50" />
<circle cx="50" cy="50" r="40" />
<circle cx="150" cy="50" r="40" />
</svg>
- [0] https://developer.mozilla.org/en-US/docs/Web/SVG/Element/cir...
If the LLM saves the SVG vector graphic to a raster image like a PNG and prompts with that instead, it will have no trouble labeling what's depicted in the SVG.
So, the task is "describe what an SVG depicts without saving it to a raster image and prompting with that"?
I believe this is exactly what the modern multimodal models are doing -- they are not strictly Large Language Models any more.
Which part is the query preprocessor?
/? LLM stack app architecture https://www.google.com/search?q=LLM+stack+app+architecture&t...
https://cobusgreyling.medium.com/emerging-large-language-mod... ; Flexibility / Complexity,
Using a list to manage executive function
todo.txt is a lightweight text format that a number of apps support: http://todotxt.org/ . From http://todotxt.org/todo.txt :
(priority) task text +project @context
(A) Call Mom @Phone +Family
(A) Schedule annual checkup +Health
(B) Outline chapter 5 +Novel @Computer
(C) Add cover sheets @Office +TPSReports
x Download Todo.txt mobile app @Phone
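As a rough sketch of how the format above tokenizes (the function name and return shape are my own, not part of the todo.txt spec):

```python
import re

def parse_todo_line(line):
    """Split a todo.txt line into (priority, +projects, @contexts, done)."""
    done = line.startswith("x ")          # completed tasks are prefixed with "x "
    text = line[2:] if done else line
    m = re.match(r"\(([A-Z])\) ", text)   # optional "(A) " priority prefix
    priority = m.group(1) if m else None
    projects = re.findall(r"\+(\S+)", text)
    contexts = re.findall(r"@(\S+)", text)
    return priority, projects, contexts, done

parse_todo_line("(A) Call Mom @Phone +Family")
# -> ("A", ["Family"], ["Phone"], False)
```

Because the markup is per-token, grep alone works for filtering (e.g. `grep '@Phone' todo.txt`), which is much of the format's appeal.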
From "What does my engineering manager do all day?" (2021) https://news.ycombinator.com/item?id=28680961 :
> - [ ] Create a workflow document with URLs and Text Templates
> - [ ] Create a daily running document with my 3 questions and headings and indented markdown checkbox lists; possibly also with todotxt/todo.txt / TaskWarrior & BugWarrior -style lineitem markup.
3 questions: Since, Before, Obstacles: What have I done since the last time we met? What will I do before the next time we meet? What obstacles are blocking my progress?
## yyyy-mm-dd
### @teammembername
#### Since
#### Before
#### Obstacles
A TaskWarrior demo from which I wrote https://gist.github.com/westurner/dacddb317bb99cfd8b19a3407d... :
$ task help
task; task list; # 0 tasks.
task add "Read the source due:today priority:H project:projectA"
task add Read the docs later due:tomorrow priority:M project:projectA
task add project:one task abc
# "Created task 3."
task add project:one task def +someday depends:3
task
task context define projectA "project:projectA or +urgent"
task context projectA
task add task ghi
task add task jkl +someday depends:3
task # lists just the tasks for the current context: "project:projectA"
task next
task 1 done
task next
task 2 start
task 2 done
task context none
task delete
TaskWarrior Best Practices: https://taskwarrior.org/docs/best-practices/
The TaskWarrior +WAITING virtual tag is the new way instead of status:waiting, according to the changelog.
Every once in awhile, I remember that I have a wiki/workflow document with a list of labels for systems like these: https://westurner.github.io/wiki/workflow#labels :
GTD says have labels like -next, -waiting, and -someday.
GTD also incorporates the 4 D's of Time Management; the Eisenhower Matrix Method.
4 D's of Time Management: Do, Defer/Schedule, Delegate, Delete
Time management > The Eisenhower Method: https://en.wikipedia.org/wiki/Time_management#The_Eisenhower...
Hipster PDA: https://en.wikipedia.org/wiki/Hipster_PDA
"Use a work journal" (2024-07; 1083 pts) https://news.ycombinator.com/item?id=40950584
U.S. Inflation Rate 1960-2024
Keep in mind the things tracked in the Consumer Price Index are not static; over time, there is a drift towards comparing apples to oranges. For example, if beef becomes too expensive and consumers switch to pork, eventually beef is removed and pork is tracked instead. That's not less inflation. That's bait and switch.
Furthermore, year-by-year inflation is presented with too little context. Instead, for a given year we should also get the compounded rate for the previous 5, 10, and 15 years. That level of transparency would be significant.
CPI: Consumer Price Index; "inflation": https://en.wikipedia.org/wiki/Consumer_price_index
US BLS: Bureau of Labor Statistics > Consumer Price Indexes Overview: https://www.bls.gov/cpi/overview.htm :
> Price indexes are available for the U.S., the four Census regions, nine Census divisions, two size of city classes, eight cross-classifications of regions and size-classes, and for 23 local areas. Indexes are available for major groups of consumer expenditures (food and beverages, housing, apparel, transportation, medical care, recreation, education and communications, and other goods and services), for items within each group, and for special categories, such as services.
- FWIU food prices never decreased after COVID-19.
https://www.google.com/search?q=FWIU+food+prices+never+decre.... :
"Consumer Price Index for All Urban Consumers: Food and Beverages in U.S. City Average (CPIFABSL)" https://fred.stlouisfed.org/series/CPIFABSL
"Consumer Price Index for All Urban Consumers: All Items Less Food and Energy in U.S. City Average (CPILFESL)" https://fred.stlouisfed.org/series/CPILFESL
> The "Consumer Price Index for All Urban Consumers: All Items Less Food & Energy" is an aggregate of prices paid by urban consumers for a typical basket of goods, excluding food and energy. This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
- COVID eviction moratoriums ended in 2021. How did that affect CPI rent/lease/mortgage, and new housing starts now that lumber prices have returned to normal? https://en.wikipedia.org/wiki/COVID-19_eviction_moratoriums_...
"Consumer price index (CPI) for rent of primary residence compared to CPI for all items in the United States from 2000 to 2023" https://www.statista.com/statistics/1440254/cpi-rent-primary...
- pandas-datareader > FRED, caching queries: https://pandas-datareader.readthedocs.io/en/latest/remote_da...
> This measurement, known as "Core CPI," is widely used by economists because food and energy have very volatile prices.
Translation: The Core Consumer Price Index has little to do with actual citizens, and instead reflects some theoretical group of people who don't eat and don't go anywhere.
In addition, politicians recite the CCPI and The Media parrot them. Both knowing - or should know - it's deceptive and doesn't actually reflect reality.
What could go wrong?
What metrics do you suggest for the purpose?
Is tone helpful?
It's not the metrics per se, it's the disconnect between what they actually represent and how they're used to misrepresent the current economic feelings of the masses.
Take the last few months in the USA: we've been assured by the party in power that inflation is under control, etc. Yet *no one* who goes to the supermarket on a weekly basis has seen that or believes it. Eventually, such gaslighting leads to mistrust and pushback.
The smart thing would be to include anything practical consumers MUST purchase. To exclude food and gas might be great for academics, but for everyone else it smells of BS.
How should the free hand of the market affect post-COVID global food prices?
Tariff spats (and firms' resultant inability to compete at global price points) or free market trade globalization with nobody else deciding the price for the contracting [fair trade] parties.
There are multiple lines that can be plotted on a chart: Core CPI, CPI Food &|| Gas, GDP, yield curve, M1 and M2, R&D foci and industry efficiency metrics
You're off topic. The discussion is about CCPI and how it's misleading and abused. That is, how it's used as a propaganda tool.
I'm not trying to explain post Covid inflation, the free market, etc.
How should they get food price inflation under control?
What other metrics for economic health do you suggest?
Well, you certainly don't get it under control by removing it from the key metric. And you don't get it under control by pretending it hasn't been removed and then gaslighting the public by saying everything is OK, that it's under control.
What can they do here?
"Harris’ plan to stop [food] price gouging could create more problems than it solves" (2024) https://www.cnn.com/2024/08/16/business/harris-price-gouging... :
> Food prices have surged by more than 20% under the Biden-Harris administration, leaving many voters eager to stretch their dollars further at the grocery store.
> On Friday, Vice President Kamala Harris said she has a solution: a federal ban on price gouging across the food industry.
They still haven't undone the prior administration's tariff wars of recent yore fwiu. What happened to everyone loves globalization and Free Trade and Fair Trade? Are you down with TPP?
But did prices reflect the value of the workers and their product in the first place?
There were compost shortages during COVID.
There are still fertilizer shortages FWIU. TIL about Koch (not Ed Koch) trying to buy Iowa's basically state-sponsored fertilizer business, which was created in direct response to fertilizer market conditions created in significant part by said parties.
Humanoid robots in agriculture, with agrivoltaics, and sustainable no-till farming practices; how much boost in efficiency can we all watch and hope for?
What Is a Knowledge Graph?
Good article on the high-level concepts of a knowledge graph, but it has some concerning mischaracterizations of the core functions of ontologies supporting the class schema, and continues disparaging competing standards-based (RDF triple-store) solutions. That the author omits the updates for property annotations using RDF* is probably not an accident, and the article glosses over the issues with their proprietary, clunky query language.
While knowledge graphs are useful in many ways, personally I wouldn't use Neo4J to build a knowledge graph as it doesn't really play to any of their strengths.
Also, I would rather stab myself with a fork than try to use Cypher to query a concept graph when better standards-based options are available.
I enjoy cypher, it's like you draw ASCII art to describe the path you want to match on and it gives you what you want. I was under the impression that with things like openCypher that cypher was becoming (if not was already) the main standard for interacting with a graph database (but I could be out of date). What are the better standards-based options you're referring to?
W3C SPARQL, SPARUL is now SPARQL Update 1.1, SPARQL-star, GQL
GraphQL is a JSON HTTP API schema (2015): https://en.wikipedia.org/wiki/GraphQL
GQL (2024): https://en.wikipedia.org/wiki/Graph_Query_Language
W3C RDF-star and SPARQL-star (2023 editors' draft): https://w3c.github.io/rdf-star/cg-spec/editors_draft.html
SPARQL/Update implementations: https://en.wikipedia.org/wiki/SPARUL#SPARQL/Update_implement...
/? graphql sparql [ cypher gremlin ] site:github.com inurl:awesome https://www.google.com/search?q=graphql+sparql++site%253Agit...
But then you need data validation everywhere; for language-portable JSON-LD RDF validation there are many implementations of JSON Schema for fixed-shape JSON-LD messages, there's the W3C SHACL Shapes Constraint Language, and json-ld-schema is (JSON Schema + SHACL)
/? hnlog SHACL, inference, reasoning; https://news.ycombinator.com/item?id=38526588 https://westurner.github.io/hnlog/#comment-38526588
> free copy of the O’Reilly book "Building Knowledge Graphs: A Practitioner’s Guide"
Knowledge Graph (disambiguation) https://en.wikipedia.org/wiki/Knowledge_Graph_(disambiguatio...
Knowledge graph: https://en.wikipedia.org/wiki/Knowledge_graph :
> In knowledge representation and reasoning, a knowledge graph is a knowledge base that uses a graph-structured data model or topology to represent and operate on data. Knowledge graphs are often used to store interlinked descriptions of entities – objects, events, situations or abstract concepts – while also encoding the free-form semantics or relationships underlying these entities. [1][2]
> Since the development of the Semantic Web, knowledge graphs have often been associated with linked open data projects, focusing on the connections between concepts and entities. [3][4] They are also historically associated with and used by search engines such as Google, Bing, Yext and Yahoo; knowledge-engines and question-answering services such as WolframAlpha, Apple's Siri, and Amazon Alexa; and social networks
Ideally, a Knowledge Graph - starting with maybe a "personal knowledge base" in a text document format that can be rendered to HTML with templates - can be linked with other data about things with correlatable names; ideally you can JOIN a knowledge graph with other graphs, if the node and edge relations with schema and URIs make that possible.
A knowledge graph is a collection of nodes and edges (or nodes and edge nodes) with schema, so that it is query-able and JOIN-able.
A Named Graph URI may be the graphid ?g of an RDF statement in a quadstore:
?g ?s ?p ?o // ?o_datatype ?o_lang
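That quad pattern can be sketched as a toy in-memory quadstore in Python (illustrative only: the graph names and URIs are invented, and a real store would index terms rather than scanning):

```python
# Toy quadstore: each statement is a (g, s, p, o) tuple, where g is
# the named-graph URI. A linear scan stands in for real indexing.
quads = {
    ("urn:graph:a", "dbpedia:Plato", "rdf:type", "schema:Person"),
    ("urn:graph:a", "dbpedia:Plato", "schema:name", "Plato"),
    ("urn:graph:b", "dbpedia:Plato", "schema:sameAs", "wd:Q859"),
}

def match(g=None, s=None, p=None, o=None):
    """Return quads matching the bound terms; None is a wildcard."""
    return sorted(
        q for q in quads
        if all(want in (None, got) for want, got in zip((g, s, p, o), q))
    )

# JOIN across named graphs on the shared subject URI:
print(match(s="dbpedia:Plato", p="schema:sameAs"))
```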
MiniBox, ultra small busybox without uncommon options
Would this compile to WASM for use as a terminal in JupyterLite? Though, busybox has the ash shell instead of bash. https://github.com/jupyterlite/jupyterlite/issues/949
It could with a few tweaks, but it lacks features compared to bash and busybox ash. For example, it doesn't support shell scripting, configuration-file initialization, etc. I wouldn't recommend it right now, though, since it doesn't have most features of a standard shell, but if you want to test it out, give it a try.
RustyBox is a c2rust port/rewrite of BusyBox to Rust; though it doesn't look like anyone has contributed fixes to the unsafe parts in a while.
So, no SysV /etc/init.d because no shell scripting, and no systemd because embedded environment resource constraints? There must be a process to init and respawn processes on events like boot (and reboot, if there's a writeable filesystem)
Yes, you are right, I am trying to create a smaller version of sysvinit compatible with busybox's init. There is still no shell scripting yet, but I do promise to make the shell fully compatible with other shells with shell scripting in a future version.
What happened to RustyBox? c2rust is way easier than doing it manually, then why is it abandoned?
Same question about rustybox. Maybe they're helping with cosmos-de or coreutils or something.
OpenWRT has procd instead of systemd, but it sources a library of shell functions and runs [somewhat haphazard] /etc/init.d scripts, instead of parsing systemd unit configuration files, which would eliminate shell-scripting errors when spawning processes as root.
https://westurner.github.io/hnlog/ Ctrl-F procd, busd
(This re: bootloaders, multiple images, and boot integrity verification: https://news.ycombinator.com/item?id=41022352 )
Systemd is great for many applications.
Alternatives to systemd: https://without-systemd.org/wiki/index_php/Alternatives_to_s...
There are advantages to systemd unit files instead of scripts. Porting packages between distros is less work with unit files. Systemd respawns processes consistently and with standard retry/backoff functionality. Systemd+journals produces indexable logs with consistent datetimes.
There's a pypi:SystemdUnitParser.
docker-systemctl-replacement > systemctl3.py parses and schedules processes defined in systemd unit files: https://github.com/gdraheim/docker-systemctl-replacement/blo...
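systemd unit files are close to INI syntax, so a rough parse is possible with the stdlib. This is an approximation, not a real unit-file parser: configparser won't handle repeated keys (e.g. multiple ExecStartPre= lines), line continuations, or specifiers like %i, which is what tools like pypi:SystemdUnitParser address; the unit file below is invented for illustration:

```python
import configparser

unit_text = """\
[Unit]
Description=Example service

[Service]
ExecStart=/usr/bin/example --serve
Restart=on-failure
RestartSec=5
"""

# strict=False tolerates some duplicate-section quirks; real unit
# parsing needs more (repeated keys, continuations, specifiers).
cfg = configparser.ConfigParser(strict=False)
cfg.optionxform = str  # systemd keys are case-sensitive
cfg.read_string(unit_text)
print(cfg["Service"]["ExecStart"])  # /usr/bin/example --serve
print(cfg["Service"]["Restart"])    # on-failure
```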
Architectural Retrospectives: The Key to Getting Better at Architecting
Architecture retrospectives sound like a good idea but don't seem to be "the key" because in practice they fail too often.
Retrospectives look backwards, and most teams aren't wired that way. And unlike sprint retrospectives, architecture retrospectives turn up issues that are unrealistic/impossible to adjust in real time.
Instead, try lightweight architectural decision records (ADRs) during the selection process, at the start, when things are easiest to change and can be the most flexible.
When teams try ADRs in practice, teamwork goes up, choices get better, and implementations get more realistic. And if someone later on wants to do a retrospective, then the key predictive information is right there in the ADR.
https://github.com/joelparkerhenderson/architecture-decision...
Is there already a good way to link an ADR Architectural Decision Record with Threat Modeling primitives and considerations?
"Because component_a doesn't support OAuth", "Because component_b doesn't support signed cookies"
Threat Model: https://en.wikipedia.org/wiki/Threat_model
GH topic: threat-modeling: https://github.com/topics/threat-modeling
Real and hypothetical architectural security issues can be linked to CWE Common Weakness Enumeration URLs.
SBOM tools help to inventory components and versions in an existing architecture and to JOIN with vuln databases that publish in OSV OpenSSF Vulnerability Format, which is useful for CloudFuzz, too.
Good question. Yes there are a variety of ways that help.
1. If the team favors lightweight markdown, then add a markdown section for Threat Modeling and markdown links to references. Some teams favor doing this for more kinds of analysis, such as business analysis (e.g. SWOT) and environment analysis (e.g. PESTLE) and risk analysis (e.g. RAID).
2. If the team favors metrics systems, then consider trying a Jupyter notebook. I haven't personally tried this. Teams anecdotally tell me these can be excellent for showing probable effects.
3. If the team is required to use existing tooling such as tracking systems, such as for auditing and compliance, then consider writing the ADR within the tracking system and linking to it internally.
> 1.
Add clickable URL links to the reference material for whichever types of analyses.
> 2. Jupyter
Notebooks often omit test assertions that would be expected of a Python module with an associated tests/ directory.
With e.g. ipytest, you can run specific unit tests in notebook input cells (instead of testing the whole notebook).
There are various ways to template and/or parametrize notebooks; copy the template.ipynb and include the date/time in the filename__2024-01-01.ipynb, copy a template git repo and modify, jupyterlab_templates, papermill
> 3. [...] consider writing the ADR within the tracking system and linking to it internally
+1. A "meta issue" or an epic includes multiple other issues.
If you reference an issue as a list item in GFM GitHub-Flavored Markdown, GitHub will auto-embed the current issue title and closed/open status:
- https://github.com/python/cpython/issues/1
- python/cpython#1
This without the leading `- ` doesn't get the card embed though: https://github.com/python/cpython/issues/1
Whereas this form, with hand-copied issue titles, non-obfuscated URLs you don't need to hover over to read, and `- [ ]` markdown checkboxes, works with all issue trackers:
- [ ] "Issue title" https://github.com/python/cpython/issues/1
- [x] "Issue title" https://github.com/python/cpython/issues/2
ARPA-H announces awards to develop novel technologies for precise tumor removal
This is the sort of thing I'd love to see NIH run many parallel research tracks on, given backing & support to make the research happen & to release the work.
Precision medicine has so much potential, and feels due for some really wild breakthroughs, with its ability to make so many exact & small interventions.
I wonder where the Obama Precision Medicine (2015) work has gotten to so far. https://obamawhitehouse.archives.gov/the-press-office/2015/0...
The biggest Obama spend was to create a research cohort (https://allofus.nih.gov/). So far, it hasn't paid dividends in an appreciable way, but the UK Biobank (on which the US program is partially modeled) started in 2006 and is now contributing immensely to the development of medicine.
The US program has the potential to be even more valuable if managed well, but I haven't seen overwhelming indications of reaching that potential yet; however, I think a few more years are needed for a clear evaluation.
Sadly the All of Us program (of which I am a research subject [and researcher]) hasn't done any sort of imaging or ECG. They did draw blood, so that allowed for genome sequencing. That may also, in principle, allow for assaying new biomarkers (I don't believe anything like that has been funded though).
2009-2010 Meaningful Use criteria required physicians to implement electronic health records.
There's now FHIR for sharing records between different vendors' EHR systems. There's a JSON-LD representation of FHIR.
Which health and exercise apps can generate FHIR for self-reporting?
Can participants forward their other EHR/EMR records to the All of Us program?
Can participants or Red Cross or other blood donation services forward collected vitals and sample screening data to the All of Us program?
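As a sketch of what a self-reported vital sign could look like as a FHIR resource (illustrative only: the field choices loosely follow the FHIR R4 Observation resource, and 8867-4 is the LOINC code commonly used for heart rate; consult the FHIR spec before relying on this shape):

```python
import json

# A minimal FHIR-style Observation for a self-reported heart rate.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org",
                    "code": "8867-4",
                    "display": "Heart rate"}]
    },
    "valueQuantity": {"value": 62,
                      "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org",
                      "code": "/min"},
    "effectiveDateTime": "2024-01-01T07:00:00Z",
}
print(json.dumps(observation)[:50])
```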
The imaging reports are being imported, but not the images.
The bigger issue is that clinical imaging is done due to medical indications. Medical indications dramatically confound the results. A key element of the UK Biobank's value is that imaging and ECG are being done without any clinical indication.
So they need more data from healthy patients when they're healthy in order to determine what's anomalous?
SIEM tools do anomaly detection with lots of textual log data and sensor data.
What would a Cost-effectiveness analysis say about collecting data from healthy patients: https://en.wikipedia.org/wiki/Cost-effectiveness_analysis
TIL the ROC curve applies to medical tests.
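Re: the ROC curve for medical tests, the idea is to sweep a decision threshold over a biomarker and trace true-positive rate against false-positive rate; a stdlib sketch with hypothetical biomarker readings (the scores and labels below are invented):

```python
def roc_points(scores, labels):
    """(FPR, TPR) at each threshold; labels: 1 = diseased, 0 = healthy."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical biomarker readings for 4 diseased and 4 healthy subjects:
scores = [9.1, 8.2, 7.5, 6.8, 6.0, 5.1, 4.3, 3.9]
labels = [1,   1,   1,   0,   1,   0,   0,   0]

pts = roc_points(scores, labels)
# Area under the curve via the trapezoid rule; 0.5 = chance, 1.0 = perfect:
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
print(round(auc, 3))  # 0.938
```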
Photon Entanglement Drives Brain Function
Traditional belief: photons do not interact with other photons, and photons are massless according to the mass-energy relation.
New findings: photons can interact, as phonons in matter.
"Quantum entangled photons react to Earth's spin" (2024) https://news.ycombinator.com/item?id=40720147 :
> Actually, photons do interact with photons; as phonons in matter: "Quantum vortices of strongly interacting photons" (2024) https://www.science.org/doi/10.1126/science.adh5315 https://news.ycombinator.com/item?id=40600762
"New theory links quantum geometry to electron-phonon coupling" (2024) https://news.ycombinator.com/item?id=40663966 https://phys.org/news/2024-06-theory-links-quantum-geometry-... :
> A new study published in Nature Physics introduces a theory of electron-phonon coupling that is affected by the quantum geometry of the electronic wavefunctions
The field of nonlinear optics deals with photon-photon interactions in matter, and has been around for almost a century.
Do they model photons as rays, vectors, particles, waves, or fluids?
https://en.wikipedia.org/wiki/Nonlinear_optics :
> In nonlinear optics, the superposition principle no longer holds.[1][2][3]
But phonons are quantum waves in or through matter, and the superposition principle holds for phonons AFAIU.
Superposition is valid in vacuum, well, actually, until the photons have enough energy to collide and form an electron-positron pair.
It's also valid in most transparent materials, again assuming:
1) each photon doesn't have enough energy to, for example, extract an electron from the material as in the photoelectric effect, or create an electron and a hole in a semiconductor, or ...
2) there are not too many photons, so you can model the effect using linear equations
And there are weird materials where the non linear effects are easy to trigger.
The conclusion is that superposition is only a nice approximation in the easy case.
I had always assumed that activity in the brain was unsynchronized, and that it is that lack of synchrony which produces the necessary randomness through race conditions. I'm inclined to see the search for synchronization as drawing some sort of computer analogy that doesn't exist.
Brain activity is highly synchronized, at least according to the definition used by neuroscientists. Brain waves operate at certain hertz based on your level of arousal. https://en.wikipedia.org/wiki/Neural_oscillation
Brain waves also synchronize to other brain waves; "interbrain synchrony"
- "The Social Benefits of Getting Our Brains in Sync" (2024) https://www.quantamagazine.org/the-social-benefits-of-gettin...
But that might be just like the sea has waves?
The brain waves drive the synchronization process by triggering inhibition in neurons not in the targeted subgroup.
There are different frequency waves that operate over different timescales and distances and are typically involved in different types of functional activity.
Mermaid: Diagramming and Charting Tool
A key thing to appreciate is that both GitHub and GitLab support rendering Mermaid graphs in their READMEs
[0] https://docs.gitlab.com/ee/user/markdown.html
[1] https://github.blog/developer-skills/github/include-diagrams...
JupyterLab supports MermaidJS: https://github.com/jupyterlab/jupyterlab/pull/14102
Looks like Colab almost does.
The official vscode MermaidJS extension probably could work in https://vscode.dev/ : https://marketplace.visualstudio.com/items?itemName=MermaidC... :
ext install MermaidChart.vscode-mermaid-chart
LLMs can generate MermaidJS diagrams in Markdown that can easily be fixed given sufficient review.
Additional diagramming (node graph) tools: GraphViz, Visio, GSlides, Jamboard, yEd by yWorks (layout algorithms), Gephi, Dia, PlantUML, ArgoUML, blockdiag, seqdiag, rackdiag, https://markmap.js.org/
JPlag – Detecting Software Plagiarism
Should a plagiarism score be considered when generating code with an infinite monkeys algorithm with selection or better?
Would that result in an inability to write code even in a clean room, because eventually all possible code strings and mutations thereof would already be patented?
For example, are three notes or chords copyrightable?
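For a sense of scale, a naive similarity score can be computed with stdlib difflib. This is not JPlag's greedy-string-tiling algorithm, just an illustration that near-identical code with renamed identifiers still scores high at the character level:

```python
import difflib

# Two functionally identical snippets with renamed identifiers:
a = "def add(x, y):\n    return x + y\n"
b = "def add(a, b):\n    return a + b\n"

# SequenceMatcher ratio: 1.0 means identical, 0.0 means disjoint.
# Plagiarism detectors normalize tokens (so renames don't lower the
# score) and use tiling to resist statement reordering; this doesn't.
score = difflib.SequenceMatcher(None, a, b).ratio()
print(round(score, 2))
```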
Resilient Propagation and Free-Space Skyrmions in Toroidal EM Pulses
"Observation of Resilient Propagation and Free-Space Skyrmions in Toroidal Electromagnetic Pulses" (2024) https://pubs.aip.org/apr/article/11/3/031411/3306444/Observa...
- "Electromagnetic vortex cannon could enhance communication systems" (2024) https://phys.org/news/2024-08-electromagnetic-vortex-cannon-...
Elroy Jetson, Buzz Lightyear, and Ghostbusters all project rings / vortexes.
Are toroidal pulses any more stable for fusion, or thrust and handling?
"Viewing Fast Vortex Motion in a Superconductor" (2024) https://physics.aps.org/articles/v17/117
New research on why Cahokia Mounds civilization left
I used to live near the Missouri River before it meets the Mississippi River south of STL, but haven't yet made it to the Cahokia Mounds which are northeast and across the river from what is now St. Louis, Missouri.
Was it disease?
["Fusang" to the Chinese, various names to Islanders FWIU]
[?? BC/AD: Egyptian treasure in Illinois, somehow without paddleboats to steam up the Mississippi]
~800 AD: Lead Cross of Knights Templar in Arizona, according to America Unearthed S01E10. https://www.google.com/search?q=%7E800+AD%3A+Templar+Cross%2... ; a more recent dating of Tucson artifacts: https://en.wikipedia.org/wiki/Tucson_artifacts
~1000 AD: Leif Erickson, L'Anse aux Meadows; Discovering Vinland: https://en.m.wikipedia.org/wiki/Leif_Erikson#Discovering_Vin...
And then the Story of Erik the Red, and a Skraeling girl in Europe, and Columbus; and instead we'll celebrate Juneteenth, commemorating when news reached Galveston.
Did they plant those mounds? Did they all bring good soil or dirt to add to the mound?
May Pole traditions may be similar to "all circle around the mountain" practices in at least ancient Egyptian culture FWIU.
If there was a lot of contact there, would that have spread diseases? (Various traditions have intentionally high contact with holy water containers on the way in, too, for example.)
FWIU there's strong evidence for Mayans and Aztecs in North America; but who were they displacing?
For one thing, the words you want are "Maya" and "Nahua". "Mayan" is the language family and "Aztec" refers to various things, none of which are what you want.
They're also definitely unrelated to the mound cultures except for the broadest possible relationships like existing on the same continent.
How are pyramid-building cultures definitely unrelated to mound-building cultures?
Cahokia Mounds: https://en.wikipedia.org/wiki/Cahokia :
> Today, the Cahokia Mounds are considered to be the largest and most complex archaeological site north of the great pre-Columbian cities in Mexico.
Chicago was a trading post further north FWIU, but not an archaeological site.
"Michoacan, Michigan, Mishigami, Mizugami: Etymological Origins? A Legend." https://christopherbrianoconnor.medium.com/michoacan-michiga...
Is there evidence of hydrological engineering or stonework?
It's not clear whether the megalithic Sage Wall in MT was man-made, and sort of looks like the northern glacier pass it may have marked.
FWIU there are quarry sites in the southwest that predate most timelines of stonework in the Americas and in Egypt, Sri Lanka / Indonesia, and East Asia; but they're not further north than Cahokia Mounds.
In TN, there are many Clovis sites; but they decided to flood the valley that was home to Sequoyah - who gave written form to Cherokee and other native languages - and also to a 9,500-year-old archaeological site.
This says the Clovis people of Clovis, New Mexico are the oldest around: https://tennesseeencyclopedia.net/entries/paleoindians-in-te...
The Olmecs, Aztecs, and Mayans all worked stone.
From where did stonework like the Osireon originate?
> How are pyramid-building cultures definitely unrelated to mound-building cultures?
You're asserting a causal relationship, not a functional or morphological similarity. You haven't made an argument for that yet.
> This says the Clovis people of Clovis, New Mexico are the oldest around
They aren't. Pre-Clovis is well established at this point. I'm not sure you intend this with your phrasing, but maybe this will be useful. Archaeological cultures like Clovis don't name a "people", because the actual humans may or may not have shared unified identities. Type sites also just represent a clear example of the broader category they're naming, rather than anything about where things originated.
There are also thousands and thousands of years of separation between Clovis, the Mexica, the Mound cultures, and the Maya; so unless you make an argument, I'm not sure what you're trying to say here. Do you think all lithics are the same?
> The Olmecs, Aztecs, and Mayans all worked stone.
1) you've made the same naming mistake again and 2) this isn't an argument for anything.
Queues invert control flow but require flow control
I always wish more metaphors were built using conveyor belts in these discussions. It helps me mentally underscore that you have to pay attention to what queue/belt you load and why you need to give that a lot of thought.
Granted, I'm probably mostly afraid of diving into factorio again. :D
The Spintronics mechanical circuits game is sort of like conveyor belts.
Electrons are not individually identified like things on a conveyor belt.
Electrons in conductors, semiconductors, and superconductors do behave like fluids.
Turing tapes; https://hackaday.com/2016/08/18/the-turing-tapes/
Theory of computation > Models of computation: https://en.wikipedia.org/wiki/Theory_of_computation
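The conveyor-belt metaphor maps directly onto bounded queues: a full belt blocks the producer, which is exactly the flow control (backpressure) the title refers to. A minimal stdlib sketch:

```python
import queue
import threading

belt = queue.Queue(maxsize=2)  # a short conveyor belt
consumed = []

def producer():
    for item in range(5):
        belt.put(item)   # blocks while the belt is full: backpressure
    belt.put(None)       # sentinel: end of stream

def consumer():
    while (item := belt.get()) is not None:
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

Without the `maxsize` bound, a slow consumer lets the queue grow without limit; the bound is what inverts control back onto the producer.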
Reservoir of liquid water found deep in Martian rocks
Article: "Liquid water in the Martian mid-crust" (2024) https://www.pnas.org/doi/10.1073/pnas.2409983121
- "Mars may host oceans’ worth of water deep underground" [according to an analysis of seismic data] https://www.planetary.org/articles/mars-may-host-oceans-wort... :
> Now, a team of scientists has used Marsquakes — measured by NASA’s InSight lander years ago — to see what lies beneath. Since the way a Marsquake travels depends on the rock it’s passing through, the researchers could back out what Mars’ crust looks like from seismic measurements. They found that the mid-crust, about 10-20 kilometers (6-12 miles) down, may be riddled with cracks and pores filled with water. A rough estimate predicts these cracks could hold enough water to cover all of Mars with an ocean 1-2 kilometers (0.6-1.2 miles) deep
> [...] This reservoir could have percolated down through nooks and crannies billions of years ago, only stopping at huge depths where the pressure would seal off any cracks. The same process happens on our planet — but unlike Mars, Earth’s plate tectonics cycles this water back up to the surface
> [...] “It would be very challenging,” Wright said. Only a few projects have ever bored so deep into Earth’s crust, and each one was an intensive undertaking. Replicating that effort on another planet would take lots of infrastructure, Wright goes on, and lots of water.
How much water does drilling take on Earth?
Water is used to displace debris and to carry it up to the surface.
A cylinder of 30cm diameter and 10km deep would hold around 700k litres.
1 US gal = 3.785 litres
700k litres = 0.184920437 million US gallons
"How much water does the typical hydraulically fractured well require?" https://www.usgs.gov/faqs/how-much-water-does-typical-hydrau... :
> Water use per well can be anywhere from about 1.5 million gallons to about 16 million gallons
https://www.epa.gov/watersense/statistics-and-facts :
> Each American uses an average of 82 gallons of water a day at home (USGS, Estimated Use of Water in the United States in 2015).
So, 1m gal / 82 gal/person/day = 12,195 person/days of water.
Camping guidelines suggest 1-2 gallons of water per person per day.
1m gal / 2 gal/person/day = 500,000 person/days of water
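The arithmetic above as a reproducible sketch (the borehole dimensions and per-person consumption figures are the assumptions stated in-thread):

```python
import math

# Borehole: 30 cm diameter, 10 km deep.
radius_m = 0.15
depth_m = 10_000
volume_m3 = math.pi * radius_m ** 2 * depth_m
litres = volume_m3 * 1000      # ~707,000 L: "around 700k litres"
gallons = litres / 3.785       # ~0.19 million US gallons

# Person-days of water per million gallons, at two consumption rates:
home_days = 1_000_000 / 82     # US home average: 82 gal/person/day
camping_days = 1_000_000 / 2   # camping guideline: 2 gal/person/day

print(round(litres), round(gallons))
print(round(home_days), round(camping_days))
```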
The 2016 Mars NatGeo series predicts dynamics of water scarcity on Mars.
Water on Mars: https://en.wikipedia.org/wiki/Water_on_Mars
Show HN: R.py, a small subset of Python Requests
i revisited the work that i started in 2022 re: writing a subset of Requests but using the standard library: https://github.com/gabrielsroka/r
back then, i tried using urllib.request (from the standard library, not urllib3) but it lacks what Requests/urllib3 has -- connection pools and keep alive [or that's where i thought the magic was] -- so my code ran much slower.
it turns out that urllib.request uses http.client, but it closes the connection. so by using http.client directly, i can keep the connection open [that's where the real magic is]. now my code runs as fast as Requests/urllib3, but in 5 lines of code instead of 4,000-15,000+
moral of the story: RTFM over and over and over again.
  """Fetch users from the Okta API and paginate."""
  import http.client
  import json
  import re
  import urllib.parse

  # Set these:
  host = 'domain.okta.com'
  token = 'xxx'

  url = '/api/v1/users?' + urllib.parse.urlencode({'filter': 'profile.lastName eq "Doe"'})
  headers = {'authorization': 'SSWS ' + token}
  conn = http.client.HTTPSConnection(host)
  while url:
      conn.request('GET', url, headers=headers)
      res = conn.getresponse()
      for user in json.load(res):
          print(user['id'])
      # get_all() returns None when no Link header is present
      links = [link for link in res.headers.get_all('link') or [] if 'rel="next"' in link]
      url = re.search('<https://[^/]+(.+)>', links[0]).group(1) if links else None
https://docs.python.org/3/library/urllib.request.html
https://docs.python.org/3/library/http.client.html
IIRC somewhere in the Python mailing list archives there's an email about whether to add the HTTP redirect handling and then SSL support code to urllib or urllib2, or to create a new urllib or httplib.
How does performance compare to HTTPX, does it support HTTP 1.1 request pipelining, does it support HTTP/2 or HTTP/3?
Show HN: I built an animated 3D bookshelf for ebooks
The most cited authors in the Stanford Encyclopedia of Philosophy
Fascinating list that I thought yall would enjoy! If you’re not yet aware, https://plato.stanford.edu is as close to “philosophical canon” as it gets in modern American academia.
Shoutout to Gödel and von Neumann taking top spots despite not really being philosophers, at least in how they're remembered. Comparatively, I'm honestly shocked that neither Bohr nor Heisenberg made the cut, even though there are multiple articles on quantum physics… Turing also managed to sneak in under the wire, with 33 citations.
The bias inherent in the source is discussed in detail, and I would also love to hear HN ideas on how to improve this project, and how to visualize the results! I’m not the author, but this is right up my alley to say the least, and I’d love to take a crack at it.
From "Show HN: WhatTheDuck – open-source, in-browser SQL on CSV files" https://news.ycombinator.com/item?id=39836220 :
> datasette-lite can load [remote] sqlite and Parquet but not yet DuckDB (?) with Pyodide in WASM, and there's also JupyterLite as a datasette plug-in: https://github.com/simonw/datasette-lite https://news.ycombinator.com/user?id=simonw https://news.ycombinator.com/from?site=simonwillison.net
JSON-LD with https://schema.org/Person records with wikipedia/dbpedia RDF URIs would make it easy to query whichever datasets can be joined on common RDFS properties, like schema:identifier and its rdfs:subPropertyOf sub-properties, https://schema.org/url, and :sameAs.
Plato in RDF from dbpedia: https://dbpedia.org/page/Plato
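A minimal JSON-LD sketch of such a schema.org/Person record (the dbpedia and wikidata URIs are real; the property choices are illustrative):

```python
import json

plato = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Plato",
    "url": "https://en.wikipedia.org/wiki/Plato",
    "sameAs": [
        "https://dbpedia.org/resource/Plato",
        "http://www.wikidata.org/entity/Q859",
    ],
}
# The sameAs URIs are the JOIN keys when merging a citation graph
# with dbpedia/wikidata RDF about the same entity.
print(json.dumps(plato)[:40])
```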
Today there are wikipedia URLs, DOI URN URIs, shorturls in QR codes, ORCID specifically to search published :ScholarlyArticle by optional :author, and now there are W3C DIDs Decentralized Identifiers for signing, identifying, and searching of unique :Thing and skos:Concept that can be generated offline and optionally registered centrally, or centrally generated and assigned like DOIs but they're signing keys.
Given uncertainty about time intervals, plot concepts over time with charts showing graph growth. Maybe derive philosophy skos:Concept intervals (and relations, links) from human annotations and/or from LLM parsing and search snippets of Wikipedia, dbpedia RDF, wikidata RDF, and ranked Stanford Encyclopedia of Philosophy terminological occurrence frequency.
- "Datasette Enrichments: a new plugin framework for augmenting your data" (2023) by row with asyncio and optionally httpx: https://simonwillison.net/2023/Dec/1/datasette-enrichments/
Ask HN: How to Price a Product
I'm a software engineer with 11 years of experience. I have been showing interest in frontend design and development, and my day job is around it.
I built a working concept that converts designs to interactive code. However, the tech is not usable at this stage, no documentation and looks bad, but works as expected.
I'm aware of selling services as SaaS. I'd like to sell the tech as a product containing software, a product manual, and usage instructions with samples.
The target users are designers; initial feedback raised some points to address, and they are curious and interested.
Having said that it's not SaaS and not subscription-based, I'd like to know how to put a price on it as a product.
The Accounting Equation values your business:
Assets = Liabilities + Equity
Accounting Equation: https://en.wikipedia.org/wiki/Accounting_equation : Assets = Liabilities + Contributed Capital + Revenue − Expenses − Dividends
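The expanded equation balances for any consistent set of figures; a quick check with arbitrary made-up numbers:

```python
# Expanded accounting equation:
#   Assets = Liabilities + Contributed Capital + Revenue - Expenses - Dividends
# All figures below are arbitrary, for illustration only.
liabilities = 40_000
contributed_capital = 25_000
revenue = 90_000
expenses = 60_000
dividends = 5_000

assets = liabilities + contributed_capital + revenue - expenses - dividends
print(assets)  # 90000
```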
Net income = Revenue − Expenses
Cash flow is revenue received. Cash flow: https://en.wikipedia.org/wiki/Cash_flow :
> A cash flow CF is determined by its time t, nominal amount N, currency CCY, and account A; symbolically, CF = CF(t, N, CCY, A)
Though CF(A, t, ...) or CF(t, A) may be more search-index optimal; and really A_src_n, A_dest_n [...] as in the ILP Interledger Protocol.
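The quoted definition maps naturally onto a record type; a minimal sketch, where the field names follow the quote and the source/destination account split is the suggested extension, not part of the quoted definition:

```python
from dataclasses import dataclass
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class CashFlow:
    """CF = CF(t, N, CCY, A), per the quoted definition."""
    t: date        # time of the flow
    n: Decimal     # nominal amount
    ccy: str       # currency code, e.g. "USD"
    account: str   # account A; could be split into a_src, a_dest

cf = CashFlow(t=date(2024, 1, 15), n=Decimal("99.00"), ccy="USD", account="sales")
```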
Payment fees are part of the price, unless you're a charity with a "donate the processing costs too" option.
OTOH, there are already standard departmental chart-of-accounts names:
price_product_a = cost_payment_fees + cost_materials + cost_labor + cost_marketing + cost_sales + cost_support + cost_errors_omissions + cost_chargebacks + cost_legal + cost_future_liabilities + [...]
A CAS (Computer Algebra System) like {SymPy, Sage} in a notebook can help define inequality relations to bound or limit the solution volume or hypervolume(s). And then unit test functions can assert that a hypothesized model meets criteria for success.
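A minimal SymPy sketch of bounding a price with inequality relations; the unit cost, margin, and willingness-to-pay ceiling are all made-up numbers:

```python
from sympy import symbols, reduce_inequalities

price = symbols("price", positive=True)
unit_cost = 80  # hypothetical all-in unit cost

# Constraints (assumptions): at least a 20% margin over cost, and below
# a survey-derived willingness-to-pay ceiling of 150.
solution = reduce_inequalities(
    [price >= unit_cost * 1.2, price <= 150], price
)
print(solution)  # a bounded interval of viable prices
```

A unit test can then assert that a candidate price satisfies the reduced constraint set.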
But in Python, for example, test_ functions can't return values to the test runner; they would need to write [solution evaluation score] outputs to a store or a file, to be collected with the other build artifacts and attached to a GitHub Release, for example.
Eventually, costs = {a: 0, b: 2} so that it's costs[account:str|uint64] instead of cost_account, and then costs = ERP.reports[name,date].archived_output()
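A sketch of the dict-keyed version; the account names, figures, and target margin below are all made up:

```python
# Hypothetical per-unit costs for product A, keyed by account name
# (costs[account] instead of one cost_account variable per account).
costs = {
    "payment_fees": 3.20,
    "materials": 12.00,
    "labor": 40.00,
    "marketing": 8.50,
    "support": 5.00,
}

margin = 0.30  # target 30% margin (assumption)
unit_cost = sum(costs.values())
price_product_a = round(unit_cost * (1 + margin), 2)
print(unit_cost, price_product_a)
```

Swapping the literal dict for an ERP report query then changes only where `costs` comes from, not the pricing arithmetic.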
Monte Carlo simulation is possible with things like PyMC (MCMC), and TIL about PyVBMC. Agent-based simulation of consumers may or may not be more expensive, or may reduce to the same problem. In behavioral economics, many rational consumers make informed buying decisions.
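PyMC aside, even a plain NumPy Monte Carlo can sketch profit under demand and cost uncertainty; every distribution and parameter below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

price = 89.0                              # candidate price (assumption)
unit_cost = rng.normal(68.7, 5.0, n)      # uncertain unit cost
# Simple linear demand with noise: higher price, fewer buyers (assumption).
demand = np.maximum(0, rng.normal(1000 - 5 * price, 50, n))

profit = (price - unit_cost) * demand
print(profit.mean(), np.percentile(profit, 5))
```

The 5th percentile gives a rough downside bound for the candidate price under these assumed distributions.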
Looking at causal.app > Templates, I don't see anything that says "pricing" but instead "Revenue" which may imply models for sales, pricing, costs, cost drivers;
> Finance > Marketing-driven SaaS Revenue Model for software companies with multiple products. Its revenue growth is driven by marketing spend.
> Finance > Sales-driven SaaS Revenue > Understand sales rep productivity and forecast revenue accurately
/? site:causal.app pricing model https://www.google.com/search?q=site%3Acausal.app+pricing+mo...
- Blog: "The Pros and Cons of Common SaaS Pricing Models",
- Models: "Simple SaaS pricing calculator",
- /? hn site=causal.app > more lists a number of SaaS metrics posts: https://news.ycombinator.com/from?site=causal.app
/? startupschool pricing: https://www.google.com/search?q=startupschool+pricing
Startup School Curriculum > Ctrl-F pricing: https://www.startupschool.org/curriculum
- "Startup Business Models and Pricing | Startup School" https://youtube.com/watch?v=oWZbWzAyHAE&
/? price sensitivity analysis Wikipedia: https://www.google.com/search?q=price+sensitivity+analysis+W...
Pricing strategies > Models of pricing: https://en.wikipedia.org/wiki/Pricing_strategies#Models_of_p...
Price analysis > Key Aspects, Marketing: https://en.wikipedia.org/wiki/Price_analysis :
> In marketing, price analysis refers to the analysis of consumer response to theoretical prices assessed in survey research
/? site:github.com price sensitivity: https://www.google.com/search?q=site%3Agithub.com+price+sens... :
As a market economist looking at product pricing: given maximal optimization for price and CLV (customer lifetime value), what corrective forces will pull the price back down? Macroeconomic forces, competition,
Porter's five forces analysis: https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysi... :
> Porter's five forces include three forces from 'horizontal competition' – the threat of substitute products or services, the threat of established rivals, and the threat of new entrants – and two others from 'vertical' competition – the bargaining power of suppliers and the bargaining power of customers.
> Porter developed his five forces framework in reaction to the then-popular SWOT analysis, which he found both lacking in rigor and ad hoc.[3] Porter's five-forces framework is based on the structure–conduct–performance paradigm in industrial organizational economics. Other Porter's strategy tools include the value chain and generic competitive strategies.
Are there upsells now, later; planned opportunities to produce additional value for the longer term customer relationship?
When and how will you lower the price in response to competition and costs?
How does a pricing strategy vary if at all if a firm is Bootstrapping vs the Bank's money?
/? saas pricing: https://hn.algolia.com/?q=saas+pricing
Rivian reduced electrical wiring by 1.6 miles and 44 pounds
The electrical wiring in cars is Conway's law manifest in copper.
For those unfamiliar with Conway's law, I am arguing that how the car companies have organized themselves--and their budgets--ends up being directly reflected in the number of ECUs as well as how they're connected with each other. I imagine that by measuring the amount of excess copper, you'd have a pretty good measure for the overhead involved in the project management from the manufacturers' side.
(I previously worked for Daimler)
Would be funny to have a ‘mm Cu / FTE’ scaling law.
The ohm·meter is the unit of electrical resistivity (the ohm is the unit of resistance).
Electrical resistivity and conductivity: https://en.wikipedia.org/wiki/Electrical_resistivity_and_con...
Is there a name for Wh/m or W/m (of Cu or C) of loss? Just % signal loss?
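The W/m question follows directly from resistivity: resistance per metre is ρ/A, and dissipation per metre is I²R. A quick check with copper's ρ ≈ 1.68e-8 Ω·m; the wire diameter and current are illustrative:

```python
import math

rho_cu = 1.68e-8     # resistivity of copper, ohm*m
diameter = 2.0e-3    # 2 mm wire (illustrative)
current = 10.0       # amps (illustrative)

area = math.pi * (diameter / 2) ** 2     # cross-section, m^2
r_per_m = rho_cu / area                  # ohms per metre
p_per_m = current ** 2 * r_per_m         # I^2 * R loss, watts per metre
print(f"{r_per_m * 1000:.2f} mOhm/m, {p_per_m:.3f} W/m")
```

So the usual names are just "resistance per unit length" (Ω/m) and "power loss per unit length" (W/m); signal attenuation is quoted separately, in dB/m.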
"Copper Mining and Vehicle Electrification" (2024) https://www.ief.org/focus/ief-reports/copper-mining-and-vehi... .. https://news.ycombinator.com/item?id=40542826
There's already conductive graphene 3d printing filament (and far less conductive graphene). Looks like 0.8ohm*cm may be the least resistive graphene filament available: https://www.google.com/search?q=graphene+3d+printer+filament...
Are there yet CNT or TWCNT Twisted Carbon Nanotube substitutes for copper wiring?
A high energy hadron collider on the Moon
Could this be made out of a large number of Starships that land on the moon and carry a magnet payload? It would require something on the order of 1000 Starships 50 meters tall, but only about 100 Starships 100 meters tall. Perhaps an existing Starship with a payload that extends a magnet another 50 meters higher after landing. Has Musk already thought about this?
Correction: it would require about 500 Starships 100 meters tall. ChatGPT 4o mini is even worse than me at this line-of-sight math!
Taller towers could be built to extend upwards out of a Starship payload, especially if guy-wires are used for stability. The magnets would require shielding from the Sun and from reflections off the lunar surface, similar to James Webb but far less demanding, to reach a temperature suitable for superconducting magnets. Obviously solar power, with batteries to enable lunar nighttime operation. How many magnets are there in the LHC? How exactly circular (or not) does it need to be?
Next thought is to build it in space where there can be unlimited expansion of the ring diameter. Problem would be focusing the beam by aiming many freely floating magnets. Would require some form of electronic beam aiming that can correct for motion of the magnets. My EM theory is too rusty (50 years since I got a C in the course) to figure out what angle a proton beam can be bent by one magnet at these energies and so how many magnets would be required.
NotebookLLM is designed for this math.
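The line-of-sight part is easy to sanity-check directly: from height h above a sphere of radius R, the horizon distance is sqrt(2Rh), so two towers whose tops must see each other can be spaced 2*sqrt(2Rh) apart. A sketch (this ignores beam-bending and magnet-placement constraints, so it's only a lower bound on tower count):

```python
import math

R_MOON = 1_737_400.0  # mean lunar radius, metres

def towers_needed(h: float) -> int:
    """Towers of height h around a lunar great circle, spaced so each
    tower's top has line of sight to its neighbours' tops."""
    horizon = math.sqrt(2 * R_MOON * h)  # distance to horizon from height h
    spacing = 2 * horizon                # two horizon reaches meet mid-span
    circumference = 2 * math.pi * R_MOON
    return math.ceil(circumference / spacing)

print(towers_needed(100), towers_needed(50))  # roughly 293 and 415
```

So roughly 300 towers at 100 m and 400-plus at 50 m, the same order of magnitude as the comment's corrected estimate.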
First thoughts without reading the paper: there's too much noise and the moon is hollow (*), so even if you could assemble it [a particle collider] by bootstrapping with solar and thermoelectric and moon dirt and self-replicating robots, the rare earth payload cost is probably a primary cost driver barring new methods for making magnets from moon rock rare earths.
Is the moon hollow?
Lunar seismology: https://en.wikipedia.org/wiki/Lunar_seismology :
> NASA's Planetary Science Decadal Survey for 2012-2022 [12] lists a lunar geophysical network as a recommended New Frontiers mission. [...]
> NASA awarded five DALI grants in 2024, including research on ground-penetrating radar and a magnometer system for determining properties of the lunar core. [14]
Spellcheck says "magnometer" is not even a word.
But how much radiation noise is there from solar and cosmic wind on the surface of the moon, given the moon's lack of a magnetosphere (and thus, in part, its lack of an atmosphere)? Or, shielded by how many meters of moon, underground in dormant lava tubes or vents?
> Structure of the Lunar Interior: The solid core has a radius of about 240 km and is surrounded by a much thinner liquid outer core with a thickness of about 90 km.[9] The partial melt layer sits above the liquid outer core and has a thickness of about 150 km. The mantle extends to within 45 ± 5 km of the lunar surface.
What are the costs to drill out the underground collider ring, and at what depth? But first, the moon-drilling unit(s) would have to be assembled from local materials given just energy (and thus on-the-moon production systems).
Presumably only the collision area needs to be underground to minimize noise; the rest of the ring can be above the surface. Building things on the moon is hard, especially producing materials; it's much easier to ship them from Earth using Starships or a derivative. One could start with a small ring made from a few ships, build a bigger one, and finally a great-circle one. Actually, only about 500 towers 100 meters high are needed, each supporting a magnet. Starship can deliver 100 metric tons to LEO, so it could deliver a significant fraction of that to the lunar surface. Maybe 25 Starships performing 25 lunar missions each would be enough, assuming LEO refueling and payload transfer.
Quantum Cryptography Has Everyone Scrambling
This is an article about QKD, a physical/hardware encryption technology that, to my understanding, cryptography engineers do not actually take seriously. It's not about post-quantum encryption techniques (PQC) like structured lattices.
As far as I can tell, QKD is mainly useful to tell people to separate out the traffic that is really worth snooping on, onto a network that is hard to snoop on. It is much harder for someone to spy on your secrets when they don't pass through traditional network devices.
Of course, it also tells a dedicated adversary with L1 access exactly where to tap your cables.
Show HN: I've spent nearly 5y on a web app that creates 3D apartments
Funny, I worked on something similar. At my first job we needed to build a fancy 3D alerting system for a client. I found a three.js-based project on GitHub (I think it was called BluePrint 3D) and pitched it to my boss. It saved me from struggling to figure out how to build the same things in three.js with zero experience, and also saved us hundreds of hours rebuilding the same thing. It looked somewhat like this tool, though I'm sure this one's way more polished.
It too had a 2D editor for 3D, it was cool, but we were just building floorplans and displaying live data on those floor plans, so all the useful design stuff was scrapped for the most part. This looks nicely polished, good job.
It was a painful project due to the client asking for things that were just... well they were insane.
Neural radiance fields: https://en.wikipedia.org/wiki/Neural_radiance_field :
> A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020,[1] it has since gained significant attention for its potential applications in computer graphics and content creation.[2]
- https://news.ycombinator.com/item?id=38636329
- Business Concept: a B2B business idea from a while ago, expanded: "Adjustable Wire Shelving Emporium"; a [business, office supply] client product-mix 3D configurator with product-placement upsells; from photos of a space to products that would work there. Presumably there's already dimensional calibration with a known-good dimension or two;
- ENH: the tabletop in that photo is n x m, so the dimensions of the rest of the objects can probably be estimated proportionally.
- ENH: the building was constructed in YYYY in Locality, so the building code there then said that the commercial framing studs should be here and the cables and wiring should be there.
My issue was moreso I had 0 experience with 3D and they wanted me to slap something together with threejs. I didn't even know JavaScript that well at the time. That is a pretty cool project though.
Show HN: BudgetFlow – Budget planning using interactive Sankey diagrams
Nice use of Sankey. Here's an ask or thought, I would like to feed this automatically with my spend data from my bank account. So I can export a csv that has time-date, entity, amount (credit/debit) - would be great if it could spit this out by category (where perhaps an llm could help with this task). I would then like flow to be automated monthly or perhaps quarterly with major deltas (per my definition) then pushed to me via alerts. So this isn't so much active budget management but passive nudging where the shape of my spend changes something I don't look at now but would like to.
If you have a CSV and are happy using LLM, then you should ask an LLM to give you python code to generate a sankey plot. It's not difficult to wrangle.
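A minimal sketch of that wrangling with pandas; the CSV columns, entity names, and categories are all assumptions about what a bank export plus LLM categorization might look like, and the three output lists plug straight into Plotly's `go.Figure(go.Sankey(...))`:

```python
import io
import pandas as pd

# Hypothetical bank export: date, entity, amount (negative = debit),
# plus a category column (which an LLM could infer from the entity).
csv = io.StringIO("""date,entity,amount,category
2024-01-03,ACME Groceries,-120.50,Groceries
2024-01-05,Employer Inc,3000.00,Income
2024-01-09,City Transit,-45.00,Transport
2024-01-12,ACME Groceries,-88.25,Groceries
""")
df = pd.read_csv(csv, parse_dates=["date"])

# Flow: Income -> each spend category, summed over the period.
spend = df[df["amount"] < 0].groupby("category")["amount"].sum().abs()

labels = ["Income"] + list(spend.index)
sources = [0] * len(spend)                 # all flows leave "Income"
targets = list(range(1, len(spend) + 1))
values = list(spend.values)
# These three lists feed a Plotly Sankey:
#   go.Figure(go.Sankey(node=dict(label=labels),
#                       link=dict(source=sources, target=targets, value=values)))
```

Diffing `spend` between months would give the "major deltas" for passive alerting.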
The Plaid API JSON includes inferred transaction categories.
OFX is one alternative to CSV for transaction data.
ofxparse parses OFX: https://pypi.org/project/ofxparse/
The W3C ILP (Interledger Protocol) is for inter-ledger data between traditional and digital ledgers. ILP also has a message spec for transactions.
Scientists convert bacteria into efficient cellulose producers
> A new approach has been presented by the research group led by André Studart, Professor of Complex Materials at ETH Zurich, using the cellulose-producing bacterium Komagataeibacter sucrofermentans. [...]
> K. sucrofermentans naturally produces high-purity cellulose, a material that is in great demand for biomedical applications and the production of packaging material and textiles. Two properties of this type of cellulose are that it supports wound healing and prevents infections.
From "Cellulose Packaging: A Biodegradable Alternative" https://www.greencompostables.com/blog/cellulose-packaging :
> What is cellulose packaging? Cellulose packaging is made from cellulose-based materials such as paper, cardboard, or cellophane.
> Cellulose can also be used to produce a type of bioplastic that is biodegradable and more environmentally friendly than petroleum-based plastics. These bioplastics can be utilized to make food containers, bottles, cups or trays.
> Can cellulose replace plastic? With an annual growth rate of 5–10% over the past decade, cellulose is already at the forefront of replacing plastic in everyday use.
> Cellophane, especially, is set to replace plastic film packaging soon. According to a Future Market Insights report, cellulose packaging will have a compound annual growth rate of 4.9% between 2018 and 2028.
Is there already cling-wrap cellophane? Silicone lids are washable, freezable, and microwaveable.
Open Source Farming Robot
It still looks like the software is written by people who don't know how to care for plants. You don't spray water on leaves as shown in the video; you'll just end up with a fungus infestation. You water the soil and nourish the microorganisms that facilitate nutrient absorption in roots. But I don't see any reason the technology can't be adapted to do the right thing.
Probably?
But spraying water on leaves is not only the way water naturally gets to plants, it's often the only practical way to water crops at scale. Center-pivot irrigation has dramatically increased the amount of and reliability of arable cropland, while being dramatically less sensitive to topography and preparation than flood irrigation.
The advice to "water the soil, not the leaves" is founded in manual watering regimes in very small-scale gardening, often with crops bred to optimize for unnaturally prolific growth at the cost of susceptibility to fungal diseases, but which are still immature, exposing the soil. Or with transplanted bushes and trees where you have full access to the entire mulch bed. And it's absolutely a superior method, in those instances... but it's not like it's a hard-and-fast rule.
We can extend the technique out to mid-size market gardens with modern drip-lines, at the cost of adding to the horrific amounts of plastic being constantly UV-weathered that we see in mid-size market gardens.
Drip irrigation is kind of a thing out here in the desert...
And then there is buried drip irrigation...
Yes, but as GP said that doesn't scale. I live in an agriculture heavy community in the desert (mountain-west USA), and drip irrigation is only really used for small gardens and landscaping. Anyone with an acre or more of crops is not using drip.
I certainly agree that drip is the ideal, and when you aren't doing drip you want to minimize the standing water on leaves, but if I were designing this project I would design for scale.
But drip irrigation doesn’t scale because you would need to lay + connect + pressurize + maintain hundreds of miles of hoses. It’s high-CapEx.
A “watering robot”, meanwhile, can just do what a human gardener does to water a garden, “at scale.”
Picture a carrot harvester-alike machine — something whose main body sits on a dirt track between narrow-packed row-groups, with a gantry over the row-group supported by narrow inter-row wheels. Except instead of picker arms above the rows, this machine would have hoses hanging down between each row (or hoses running down the gantry wheels, depending on placement) with little electronic valve-boxes on the ends of the hoses, and side-facing jet nozzles on the sides of the valve boxes. The hoses stay always-fully-pressurized (from a tank + compressor attached to the main body); the valves get triggered to open at a set rate and pulse-width, to feed the right amount of water directly to the soil.
“But isn’t the ‘drip’ part of drip irrigation important?” Not really, no! (They just do it because constant passive input is lazy and predictable and lower-maintenance.) Actual rain is very bursty, so most plants (incl. crops) aren’t bothered at all by having their soil periodically drenched and then allowed to dry out again, getting almost bone dry before the next drenching. In fact, everything other than wetland crops like rice prefers this; and the dry-out cycles decrease the growth rates of things like parasitic fungi.
As a bonus, the exact same platform could perform other functions at the same time. In fact, look at it the other way around: a “watering robot” is just an extension of existing precision weeding robots (i.e. the machines designed to reduce reliance on pesticides by precision-targeting pesticide, or clipping/picking weeds, or burning/layering weeds away, or etc.) Any robot that can “get in there” at ground level between rows to do that, can also be made to water the soil while it’s down there.
Fair point, the robot could lower its nozzle to the ground and jet the water there, much like a human would, with probably not a lot of changes required. That does seem like it would be a good optimization.
Isn't it better to mist plants, especially if you can't delay watering due to full sun?
IIUC that's what big box gardening centers do; with fixed retractable hoses for misting and watering.
A robot could make and refill clay irrigation Ollas with or without microsprinkler inlets and level sensing with backscatter RF, but do Ollas scale?
Why have a moving part there at all? You could just modulate to-spec valves between high and low, or better, use fixed-height sprayers.
FWIU newer solar weeding robots - which minimize pesticide use by direct substitution and minimize herbicide by vigilant crop monitoring - have fixed arrays instead of moving part lasers
An agricultural robot spec:
Large wheels; light frame; can right itself when terrain topology is misestimated; tensor operations per second (TOPS); computer vision (OpenCV, NeRF); modular sensor and utility mounts; open CAD model with material density for mass-centroid and ground-contact outer-hull rollover estimation.
There is a shift underway from ubiquitous tilling to lower and no till options. Tilling solves certain problems in a field - for a while - but causes others, and is relatively expensive. Buried lines do not coexist with tilling.
We are coming to understand a bit more about root biology and the ecosystem of topsoil and it seems like the 20th century approach may have been a highly optimized technique of using a sledgehammer to pound in a screw.
> IIUC that's what big box gardening centers do; with fixed retractable hoses for misting and watering.
A notebook with pandas would have had a df.plot().
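E.g., a two-line sketch; the MTU/throughput figures are placeholders, not the article's actual benchmark table:

```python
import pandas as pd

# Hypothetical throughput results by MTU; in a notebook, df.plot()
# (which requires matplotlib) would render this as a line chart.
df = pd.DataFrame(
    {"mtu": [1500, 3000, 9000], "gbps": [7.2, 8.9, 9.71]}
).set_index("mtu")
print(df["gbps"].max())  # 9.71
# df.plot()
```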
9.71 Gbps with WireGuard (wg) on a 10 Gbps link, with sysctl tunings and custom MTUs.
I had heard of token ring, but not 10BASE5: https://en.wikipedia.org/wiki/10BASE5