Curl 8.2.0 supports --ca-native and --proxy-ca-native with OpenSSL 3.2 on Windows
"OpenSSL Announces Final Release of OpenSSL 3.2.0" https://news.ycombinator.com/item?id=38392887 https://github.com/openssl/openssl/blob/openssl-3.2.0/NEWS.m... :
> Support for using the Windows system certificate store as a source of trusted root certificates
> This is not yet enabled by default and must be activated using an environment variable. This is likely to become enabled by default in a future feature release
openssl/openssl > "Add support for Windows CA certificate store" https://github.com/openssl/openssl/pull/18070/files
How should OS System Cert Store(s) be supported on Linux platforms with OpenSSL and e.g. Curl?
PEP-0543 had TLSConfiguration(..., trust_store=DEFAULT:TrustStore) https://peps.python.org/pep-0543/
class TrustStore() https://peps.python.org/pep-0543/#trust-store
And a CipherSuite() class with params, and a heading for each of a number of TLS backends: OpenSSL (*), SecureTransport (macOS), SChannel (Windows), NSS (Firefox); tlsdb https://peps.python.org/pep-0543/#cipher-suites
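For comparison, the stdlib route to the OS trust store today, roughly what PEP 543's TrustStore.system() would have standardized across backends; a minimal sketch:

import ssl

# PEP 543 proposed TrustStore.system() as a backend-neutral handle on the
# OS trust store; with the stdlib's OpenSSL-backed API, the closest
# equivalent is the default context, which loads the system CAs:
ctx = ssl.create_default_context()
ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)
print(ctx.cert_store_stats())  # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}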
Curl 8_2_0 release: https://github.com/curl/curl/releases/tag/curl-8_2_0
Curl 8.2.0 changes.html: https://curl.se/changes.html#8_2_0
Why does Gnome fingerprint unlock not unlock the keyring?
> And that's the difference. When you perform TouchID authentication, the secure enclave can decide to release a secret that can be used to decrypt your keyring. We can't easily do this under Linux because we don't have an interface to store those secrets.
> The secret material can't just be stored on disk - that would allow anyone who had access to the disk to use that material to decrypt the keyring and get access to the passwords, defeating the object.
I'll provide a broader explanation: you cannot encrypt your key with a fingerprint.
A genuine "throw away the key" lock is only possible when the decryption key is completely erased from memory during the screen lock, and you cannot use your fingerprint or face image as a key by itself.
A screen lock using a stored secret is inherently incapable of providing encryption at rest. It's like a password on a sticky note.
A "throw away the key" lock is only possible with a password or a smartcard.
If you trust the hardware then it's entirely possible to tie the release of an encryption key to a fingerprint validated by that hardware. Hardware-backed keys are widely used (the entire WebAuthn ecosystem is predicated upon them being trustworthy), and having that hardware validate a fingerprint rather than merely physical presence is an improvement.
https://news.ycombinator.com/item?id=33311523 :
> [WebAuthn, TPM, U2F/FIDO2, Seahorse,]
> tpm-fido: https://github.com/psanford/tpm-fido :
>> tpm-fido is FIDO token implementation for Linux that protects the token keys by using your system's TPM. tpm-fido uses Linux's uhid facility to emulate a USB HID device so that it is properly detected by browsers.
TPM > TPM software libraries: https://en.wikipedia.org/wiki/Trusted_Platform_Module#TPM_so...
TPM > Virtualization; virtual TPM devices: https://en.wikipedia.org/wiki/Trusted_Platform_Module#Virtua...
WebAuthn: https://en.wikipedia.org/wiki/WebAuthn
Dutch astronomers prove last piece of gas feedback-feeding loop of black hole
I don't understand this article at all. Is it some kind of discovery that gas that exists somewhere can be attracted to a supermassive black hole, or to any body with mass? I don't see the relevance of the fact that the gas was once ejected by the black hole; it's well known that mass attracts.
Also:
> Supermassive black holes at the centers of galaxies have long been known to emit enormous amounts of energy. This causes the surrounding gas to heat up and flow far away from the center. This, in turn, makes the black hole less active and lets cool gas, in theory, flow back.
This is a very strange way to word that the immense friction in the accretion disks of black holes creates immense heat and light, is it not? As far as I know, the actual energy black holes emit in the form of Hawking radiation is minute and undetectably low with current sensors.
I think what they mean is that matter falling into a black hole emits A LOT of energy.
One of the less well known facts about black holes is that they are the best known method to convert mass into energy. A black hole can be used to recover up to 40% of infalling mass as energy.
Also, technically, Hawking radiation is not the only way to get energy out of a black hole. You can extract energy from a rotating black hole. Normally, rotational energy contributes to the total mass of the black hole, but you can extract that energy, robbing the black hole of some of its rotational energy and consequently reducing its total mass. Essentially, it can be used to accelerate objects (this is called the Penrose process).
When two black holes collide, a portion of their mass is radiated as gravitational waves. This has been verified by LIGO, showing that the resulting black hole has less mass than the sum of the masses of the black holes before the collision.
I am also pretty sure that a charged black hole could act as a fantastic battery, except there is no known mechanism that could create a significantly charged one.
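For reference, the "up to 40%" figure above matches the standard accretion-efficiency bound set by the binding energy at the innermost stable circular orbit (ISCO):

η_max = 1 − E_ISCO/(m·c²) = 1 − 1/√3 ≈ 0.42  (extremal Kerr)
η     = 1 − √(8/9) ≈ 0.057  (non-rotating Schwarzschild)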
Does the OT potentially confirm models of superfluid quantum space that have Bernoulli's low pressure and vorticity, Gross-Pitaevskii, ...?
https://news.ycombinator.com/item?id=38370118 :
> > "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017) :
>> [...] Vorticity is interpreted as spin (a particle's internal motion). Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity*
Also, https://news.ycombinator.com/item?id=38009426 :
> Can distorted photonic crystals help confirm or reject current theories of superfluid quantum gravity?
"Closing the feedback-feeding loop of the radio galaxy 3C 84" (2023) https://www.nature.com/articles/s41550-023-02138-y
Intuitive guide to convolution
IMO 3blue1brown presents this in a very easy to understand way: https://www.youtube.com/watch?v=KuXjwB4LzSA
I wonder if we will ever be at a stage where an LLM can generate videos like that as an ELI5.
https://github.com/360macky/generative-manim :
> Generative Manim is a prototype of a web app that uses GPT-4 to generate videos with Manim. The idea behind this project is taking advantage of the power of GPT-4 in programming, the understanding of human language and the animation capabilities of Manim to generate a tool that could be used by anyone to create videos. Regardless of their programming or video editing skills.
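For reference, the output of such a tool is ordinary Manim code; a minimal scene using the manim-community API might look like this (the scene itself is made up):

from manim import Scene, Circle, Create

class ConvolutionIntro(Scene):
    def construct(self):
        # GPT-4 would emit code of this shape; Manim renders it to video.
        self.play(Create(Circle()))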
"TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/wenhuchen/TheoremQA#leaderboard
How do you score memory retention and video watching comprehension? The classic educators' optimization challenge
"Khan Academy’s 7-Step Approach to Prompt Engineering for Khanmigo" https://blog.khanacademy.org/khan-academys-7-step-approach-t...
"Teaching with AI" https://openai.com/blog/teaching-with-ai
Photomath: https://en.wikipedia.org/wiki/Photomath
"Pix2tex: Using a ViT to convert images of equations into LaTeX code" > latex2sympy: https://news.ycombinator.com/item?id=38138020
SymPy Beta is a WASM fork of SymPy Gamma: https://github.com/eagleoflqj/sympy_beta
SymPy Gamma: https://github.com/sympy/sympy_gamma
TIL i^(4x) == e^(2iπx)
import unittest
import sympy as sy

test = unittest.TestCase()
x = sy.symbols("x")
# i = e^(i*pi/2), so i^(4*x) = e^(2*i*pi*x)
# with test.assertRaises(AssertionError):
test.assertEqual(sy.I**(4*x), sy.E**(2*sy.I*sy.pi*x))
https://www.wolframalpha.com/input/?i=I%5E%284x%29+%3D%3D+e%...
S2n-TLS – A C99 implementation of the TLS/SSL protocol
"Continuous formal verification of Amazon s2n" (2018) https://link.springer.com/chapter/10.1007/978-3-319-96142-2_...
https://scholar.google.com/scholar?cites=2686812922904040715...
But formal methods (and TLA+ for distributed computation) don't eliminate side channels.
There have been some attempts at formally verifying lack of timing attacks (to the extent allowed by hardware). This is the one I know the most about: https://dl.acm.org/doi/pdf/10.1145/3314221.3314605 but there are likely others
"FaCT: A DSL for Timing-Sensitive Computation" https://dl.acm.org/doi/pdf/10.1145/3314221.3314605 citations in gscholar: https://scholar.google.com/scholar?cites=1570199926308101856...
Also on the side-channel topic, for example:
E. Prouff and M. Rivain, "Masking against Side-Channel Attacks: A Formal Security Proof", EUROCRYPT 2013, LNCS 7881.
S. Dziembowski and K. Pietrzak, "Leakage-Resilient Cryptography", FOCS 2008, doi:10.1109/FOCS.2008.56.
Side-channel: https://en.wikipedia.org/wiki/Side-channel_attack
Time complexity > Constant time: https://en.wikipedia.org/wiki/Time_complexity#Constant_time
"Masking against side-channel attacks: A formal security proof" (2013) https://link.springer.com/chapter/10.1007/978-3-642-38348-9_... https://scholar.google.com/scholar?cites=1479355492097437276...
"Leakage-Resilient Cryptography" (2008) https://ieeexplore.ieee.org/abstract/document/4690963 https://scholar.google.com/scholar?cites=5581902451405085906...
Show HN: Demo of Agent Based Model on GPU with CUDA and OpenGL (Windows/Linux)
Demo of agent based model on GPU with CUDA and OpenGL (Windows/Linux)
- Agent instances on GPU memory
- Uses SSBO for instanced objects (with GLSL 450 shaders)
- CUDA OpenGL interop
- Renders with GLFW3 window manager
- Dynamic camera views in OpenGL (pan, zoom with mouse)
- Libraries installed using vcpkg
(https://github.com/KienTTran/ABMGPU)
Could this work in WebGL and/or WebGPU (and/or WebNN) with or without WASM in a browser?
https://stackoverflow.com/questions/48228192/webgl-compute-s...
https://github.com/conda-forge/glfw-feedstock/blob/main/reci...
pyglfw: https://github.com/conda-forge/pyglfw-feedstock/blob/main/re...
- [ ] glfw recipe for emscripten-forge: https://github.com/emscripten-forge/recipes/tree/main/recipe...
Emscripten porting docs > OpenGL ES 2.0/3.0 *, glfw: https://emscripten.org/docs/porting/multimedia_and_graphics/...
WebGPU API > GPUBuffer: https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API
gpuweb/gpuweb: https://github.com/gpuweb/gpuweb
https://news.ycombinator.com/item?id=38355444 :
> It actually looks like pygame-web (pygbag) supports panda3d and harfang in WASM
Harfang and panda3d do 3D with WebGL, but FWIU not yet agents in SSBO/VBO/GPUBuffer.
SSBO: Shader Storage Buffer Object: https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Ob...
/? WebGPU compute: https://www.google.com/search?q=webgpu+compute
"WebGPU Compute Shader Basics" https://webgpufundamentals.org/webgpu/lessons/webgpu-compute...
Inoculating soil with mycorrhizal fungi can increase plant yield: study
Something I thought was really cool is that in Korean Natural Farming [1] there is a technique where you go to the forest near your farm and gather a bunch of decomposing leaves and other matter from the forest floor, then take it home and throw it in a tub of (I think) starchy water to feed the microorganisms in the material you collected. You incubate this for a while and then spread this water over your soil. The motivation for this practice is the idea that your local forest has self-selected to grow well in your particular location, and those bacteria will be very helpful. I think it's an excellent innovation. It is both very simple for poor farmers all over the world to do and based on good scientific principles.
Mycorrhiza in soil help plant roots absorb nutrients.
Mycorrhiza: https://en.wikipedia.org/wiki/Mycorrhiza
Leaf mold: https://en.wikipedia.org/wiki/Leaf_mold
KNF > Indigenous microorganisms: https://en.wikipedia.org/wiki/Korean_natural_farming#Indigen...
FWIU JWA is very similar to Castile soap?
From https://news.ycombinator.com/item?id=37171603 :
> A (soap-like) surfactant like JADAM Wetting Agent (JWA) which causes the applied treatments to stick to the plants might reduce fertilizer runoff levels; but Nitrogen-based fertilizer alone does not regenerate all of the components of topsoil. https://www.google.com/search?q=jadam+jwa
https://youtube.com/@JADAMORGANIC
> Mycorrhizae fungus in the soil help get nutrients to plant roots, and they need to be damp in order to prevent soil from turning to dirt due to solar radiation and oxidation. https://youtube.com/@soilfoodwebschool
Yield and Soil Fertility are valuable criteria to optimize for; with multi-criteria optimization.
Crop yield: https://en.wikipedia.org/wiki/Crop_yield
Soil fertility > Soil depletion: https://en.wikipedia.org/wiki/Soil_fertility#Soil_depletion
GDlog: A GPU-accelerated deductive engine
Datalog related things really seem to have momentum these days.
Interesting that the application areas seem so different. This talks mostly about specialized source code analysis applications, whereas eg in Clojure circles it's used in normal application databases. I wonder if there'd be a way to use this as a backend for eg XTDB or Datalevin.
Yeah, there are a ton of substantively different approaches to modern Datalogs, targeting different applications.
To start off: Datalog is distinguished from traditional SQL in its focus on heavily-recursive reachability-based reasoning. With respect to expressivity, you can see Datalog as CDCL/DPLL restricted to boolean constraint propagation (i.e., Horn clauses). Operationally, you can think of this as: tight range-indexed loops which are performing insertion/deduplication into an (indexed) relation-backing data structure (a BTree/trie/etc...). In SQL, you don't know the query a-priori, so you can't just index everything--but in Datalog, you know all of the rules up-front and can generate indices for everything. This ubiquitous indexing enables the state-of-the-art work we see with Datalog in static analysis (DOOP, cclyzer), security (ddisasm), etc...
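A toy illustration of that reachability-style recursion: semi-naïve, bottom-up evaluation of the classic path/edge Datalog program, in plain Python (not any of the engines discussed):

def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    # path(x, y) :- edge(x, y).
    # path(x, z) :- path(x, y), edge(y, z).
    paths = set(edges)
    delta = set(edges)
    while delta:  # semi-naive: only join newly-derived facts against edge
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - paths
        paths |= delta
    return paths

print(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")}))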
Our group targets tasks like code analysis and these big graph problems because we think they represent the most computationally-complex, hard problems that we are capable of doing. The next step here is to scale our prototypes (a handful of rules) to large, realistic systems--some potential applications of that are, e.g., raw feature extraction for binaries when you do ML over binary corpuses (which otherwise require, e.g., running IDA) on the GPU (rather than IDA on the CPU), medical reasoning (accelerating MediKanren), and (hopefully) probabilistic programming (these neuro-symbolic applications).
By contrast, I think work which takes a more traditional Databases approach (CodeQL, RDFox, ...) focus a little less on ubiquitous high-performance range-indexed insertion in a tight loop, and focus a little more on supporting robust querying and especially operating on streams. There is some very cool related work there in differential dataflow (upon which differential Datalog is built). There is a solver there named DDlog (written in Rust) which takes that approach. Our in-house experiments show that DDlog is often a constant factor slower than Souffle on GPUs, and we did not directly compare against DDlog in this paper--I expect the results would be roughly similar to Souffle.
"Introduction to Datalog" re: Linked Data https://news.ycombinator.com/context?id=34808887
pyDatalog/examples/SQLAlchemy.py: https://github.com/baojie/pydatalog/blob/master/pyDatalog/ex...
GH topics > datalog: https://github.com/topics/datalog
datalog?l=rust: https://github.com/topics/datalog?l=rust ... Cozo, Crepe
Crepe: https://github.com/ekzhang/crepe :
> Crepe is a library that allows you to write declarative logic programs in Rust, with a Datalog-like syntax. It provides a procedural macro that generates efficient, safe code and interoperates seamlessly with Rust programs.
Looks like there's not yet a Python grammar for the treeedb tree-sitter: https://github.com/langston-barrett/treeedb :
> Generate Soufflé Datalog types, relations, and facts that represent ASTs from a variety of programming languages.
Looks like roxi supports n3, which adds `=>` "implies" to the Turtle lightweight RDF representation: https://github.com/pbonte/roxi
FWIW rdflib/owl-rl: https://owl-rl.readthedocs.io/en/latest/owlrl.html :
> simple forward chaining rules are used to extend (recursively) the incoming graph with all triples that the rule sets permit (ie, the “deductive closure” of the graph is computed).
ForwardChainingStore and BackwardChainingStore implementations w/ rdflib in Python: https://github.com/RDFLib/FuXi/issues/15
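A minimal forward-chaining sketch with rdflib and owlrl (API as documented by owl-rl; the example triples are made up):

from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))
g.add((EX.rex, RDF.type, EX.Dog))

# Forward chaining: extend the graph with its deductive closure
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

assert (EX.rex, RDF.type, EX.Animal) in g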
Fast CUDA hashmaps
Gdlog is built on CuCollections.
GPU HashMap libs to benchmark: Warpcore, CuCollections,
https://github.com/NVIDIA/cuCollections
https://github.com/NVIDIA/cccl
https://github.com/sleeepyjack/warpcore
/? Rocm HashMap
DeMoriarty/DOKsparse: https://github.com/DeMoriarty/DOKSparse
/? SIMD hashmap
Google's SwissTable: https://github.com/topics/swisstable
rust-lang/hashbrown: https://github.com/rust-lang/hashbrown
CuPy has arrays but not yet hashmaps, or (GPU) SIMD, FWICS?
NumPy does SIMD: https://numpy.org/doc/stable/reference/simd/
google/highway: https://github.com/google/highway
xtensor-stack/xsimd: https://github.com/xtensor-stack/xsimd
GH topics > HashMap: https://github.com/topics/hashmap
Yeah, IncA (compiling to Souffle) and Ascent (I believe Crepe is not parallel, though also a good engine) are two other relevant cites here. Apropos linked data, our group has an MPI-based engine which is built around linked facts (subsequently enabling defunctionalization, ad-hoc polymorphism, etc..), which is very reminiscent of the discussion in the first link of yours: https://arxiv.org/abs/2211.11573
TIL about Approximate Reasoning.
"Approximate Reasoning for Large-Scale ABox in OWL DL Based on Neural-Symbolic Learning" (2023) > Parameter Settings of the CFR [2023 ChunfyReasoner] and NMT4RDFS [2018] in the Experiments. https://www.researchgate.net/figure/Parameter-Settings-of-th...
"Deep learning for noise-tolerant RDFS reasoning" (2018) > NMT4RDFS: http://www.semantic-web-journal.net/content/deep-learning-no... :
> This paper documents a novel approach that extends noise-tolerance in the SW to full RDFS reasoning. Our embedding technique— that is tailored for RDFS reasoning— consists of layering RDF graphs and encoding them in the form of 3D adjacency matrices where each layer layout forms a graph word. Each input graph and its entailments are then represented as sequences of graph words, and RDFS inference can be formulated as translation of these graph words sequences, achieved through neural machine translation. Our evaluation on LUBM1 synthetic dataset shows 97% validation accuracy and 87.76% on a subset of DBpedia while demonstrating a noise-tolerance unavailable with rule-based reasoners.
NMT4RDFS: https://github.com/Bassem-Makni/NMT4RDFS
...
A human-generated review article with an emphasis on standards; with citations to summarize:
"Why do we need SWRL and RIF in an OWL2 world?" [with SPARQL CONSTRUCT, SPIN, and now SHACL] https://answers.knowledgegraph.tech/t/why-do-we-need-swrl-an...
https://spinrdf.org/spin-shacl.html :
> From SPIN to SHACL In July 2017, the W3C has ratified the Shapes Constraint Language (SHACL) as an official W3C Recommendation. SHACL was strongly influenced by SPIN and can be regarded as its legitimate successor. This document explains how the two languages relate and shows how basically every SPIN feature has a direct equivalent in SHACL, while SHACL improves over the features explored by SPIN
/? Shacl datalog https://www.google.com/search?q=%22shacl%22+%22datalog%22
"Reconciling SHACL and Ontologies: Semantics and Validation via Rewriting" (2023) https://scholar.google.com/scholar?q=Reconciling+SHACL+and+O... :
> SHACL is used for expressing integrity constraints on complete data, while OWL allows inferring implicit facts from incomplete data; SHACL reasoners perform validation, while OWL reasoners do logical inference. Integrating these two tasks into one uniform approach is a relevant but challenging problem.
"Well-founded Semantics for Recursive SHACL" (2022) [and datalog] https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A169...
"SHACL Constraints with Inference Rules" (2019) https://arxiv.org/abs/1911.00598 https://scholar.google.com/scholar?cites=1685576975485159766...
Datalog > Evaluation: https://en.wikipedia.org/wiki/Datalog#Evaluation
...
VMware/ddlog: Differential datalog
> Bottom-up: DDlog starts from a set of input facts and computes all possible derived facts by following user-defined rules, in a bottom-up fashion. In contrast, top-down engines are optimized to answer individual user queries without computing all possible facts ahead of time. For example, given a Datalog program that computes pairs of connected vertices in a graph, a bottom-up engine maintains the set of all such pairs. A top-down engine, on the other hand, is triggered by a user query to determine whether a pair of vertices is connected and handles the query by searching for a derivation chain back to ground facts. The bottom-up approach is preferable in applications where all derived facts must be computed ahead of time and in applications where the cost of initial computation is amortized across a large number of queries.
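To contrast with the bottom-up fixpoint sketched earlier, a top-down engine answers a single query by searching backwards to ground facts; a toy Python sketch:

def connected(edges, src, dst, seen=None):
    # Top-down / backward chaining: search for a derivation chain from
    # the query back to ground facts, instead of materializing all pairs.
    seen = seen if seen is not None else {src}
    if (src, dst) in edges:
        return True
    for (a, b) in edges:
        if a == src and b not in seen:
            seen.add(b)
            if connected(edges, b, dst, seen):
                return True
    return False

print(connected({("a", "b"), ("b", "c")}, "a", "c"))  # True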
From https://community.openlinksw.com/t/virtuoso-openlink-reasoni... https://github.com/openlink/virtuoso-opensource/issues/660 :
> The Virtuoso built-in (rule sets) and custom inferencing and reasoning is backward chaining, where the inferred results are materialised at query runtime. This results in fewer physical triples having to exist in the database, saving space and ultimately cost of ownership, i.e., less physical resources are required, compared to forward chaining where the inferred data is pre-generated as physical triples, requiring more physical resources for hosting the data.
FWIU it's called ShaclSail, and there's a NotifyingSail: org.eclipse.rdf4j.sail.shacl.ShaclSail: https://rdf4j.org/javadoc/3.2.0/org/eclipse/rdf4j/sail/shacl...
"GDlog: A GPU-Accelerated Deductive Engine" (2023) https://arxiv.org/abs/2311.02206 :
> Abstract: Modern deductive database engines (e.g., LogicBlox and Soufflé) enable their users to write declarative queries which compute recursive deductions over extensional data, leaving their high-performance operationalization (query planning, semi-naïve evaluation, and parallelization) to the engine. Such engines form the backbone of modern high-throughput applications in static analysis, security auditing, social-media mining, and business analytics. State-of-the-art engines are built upon nested loop joins over explicit representations (e.g., BTrees and tries) and ubiquitously employ range indexing to accelerate iterated joins. In this work, we present GDlog: a GPU-based deductive analytics engine (implemented as a CUDA library) which achieves significant performance improvements (5--10x or more) versus prior systems. GDlog is powered by a novel range-indexed SIMD datastructure: the hash-indexed sorted array (HISA). We perform extensive evaluation on GDlog, comparing it against both CPU and GPU-based hash tables and Datalog engines, and using it to support a range of large-scale deductive queries including reachability, same generation, and context-sensitive program analysis. Our experiments show that GDlog achieves performance competitive with modern SIMD hash tables and beats prior work by an order of magnitude in runtime while offering more favorable memory footprint.
Towards accurate differential diagnosis with large language models
Differential diagnosis > Machine differential diagnosis: https://en.wikipedia.org/wiki/Differential_diagnosis
CDSS: Clinical Decision Support System: https://en.wikipedia.org/wiki/Clinical_decision_support_syst...
Treatment decision support: https://en.wikipedia.org/wiki/Treatment_decision_support :
> Treatment decision support consists of the tools and processes used to enhance medical patients’ healthcare decision-making. The term differs from clinical decision support, in that clinical decision support tools are aimed at medical professionals, while treatment decision support tools empower the people who will receive the treatments
AI in healthcare: https://en.wikipedia.org/wiki/Artificial_intelligence_in_hea...
Paper vs. devices: Brain activation differences during memory retrieval (2021)
Algorithmically and physically-biologically, Spreading activation has a (constant?) decay term: https://en.wikipedia.org/wiki/Spreading_activation
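A toy sketch of spreading activation with a constant decay factor (illustrative only, not from the paper):

def spread(graph, activation, decay=0.8, steps=3):
    # graph: {node: [(neighbor, weight), ...]}; activation: {node: level}
    for _ in range(steps):
        new = dict(activation)
        for node, level in activation.items():
            for neighbor, weight in graph.get(node, []):
                # each hop attenuates by the decay term
                new[neighbor] = new.get(neighbor, 0.0) + level * weight * decay
        activation = new
    return activation

print(spread({"paper": [("memory", 0.9)]}, {"paper": 1.0}))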
An easy-sounding problem yields numbers too big for our universe
The concept of reachability is pretty interesting in general as well.
"Can a spaceship with a certain delta-v reach another planet within an n-body system? (And if so, what is the fastest-to target/most resource preserving acceleration schedule?)" - apparently necessitates brute force, practically not computable on long time scales due to the chaos inherent in n-body systems (https://space.stackexchange.com/questions/64392/escaping-ear..., https://en.wikipedia.org/wiki/N-body_problem)
"Can a math proof be reached (within a certain number of proof steps) from the axioms?" - equivalent to the halting problem in most practical systems (https://math.stackexchange.com/questions/3477810/estimating-...)
"Can a demoscene program with a very limited program size visualize (or codegolf program output) something specific?" - asking for nontrivial properties like this usually requires actually running each program, and there are unfathomably many short programs (https://www.dwitter.net/ is a good example of this)
"In cookie-clicker games, is it possible to go above a certain score within a certain number of game ticks using some sequence of actions?" - in all but the simplest and shortest games (like https://qewasd.com), this is at least not efficiently (optimally) solvable using MILP and the like, as the number of possible action sequences increases exponentially
And yet, despite these being really hard (or, in the general case, impossible) problems, humans use heuristics to achieve progress.
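A trivial Python illustration of the blowup in e.g. the cookie-clicker case above: the number of action sequences grows exponentially with the horizon:

from itertools import product

actions = ["click", "buy_cursor", "buy_grandma"]
for n in range(1, 6):
    # |actions|**n sequences of length n to consider
    print(n, len(list(product(actions, repeat=n))))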
Quantum Discord: https://en.wikipedia.org/wiki/Quantum_discord
Quantum nonlocality: https://en.wikipedia.org/wiki/Quantum_nonlocality
Butterfly effect -> Quantum chaos: https://en.wikipedia.org/wiki/Quantum_chaos
-> Perturbation theory: https://en.wikipedia.org/wiki/Perturbation_theory_(quantum_m...
But then entropy in fluids,
Satisfiability > Model Theory: https://en.wikipedia.org/wiki/Satisfiability
Goal programming: https://en.wikipedia.org/wiki/Goal_programming
And, ultimately,
Self play (AlphaZero, ) https://en.wikipedia.org/wiki/Self-play
"Q: LLM and/or an RL agent trained on [Lean mathlib] and tests" https://github.com/leanprover-community/mathlib/issues/17919
Unsupervised speech-to-speech translation from monolingual data
Looking forward to the debate about real-time translators censoring or altering people's speech.
Also the debate about whether all human speech need be piped through such a preemptive filter. (actually not looking forward to this one). Suddenly everything that anyone says will be couched with "it is important to consult a professional to ensure safety and compliance with local regulations".
> Looking forward to the debate about real-time translators censoring or altering people's speech.
[Obama's] "Anger translator" (2012) https://www.youtube.com/results?sp=mAEA&search_query=anger+t...
A citizen ostensibly forfeits their right to sue for defamation when they become a public figure; but counter-non-fraud isn't fraud either then eh.
Say "that's not enhanced" just like the old one please.
Eye-safe laser technology to diagnose traumatic brain injury in minutes
"Window into the mind: Advanced handheld spectroscopic eye-safe technology for point-of-care neurodiagnostic" (2023) https://www.science.org/doi/10.1126/sciadv.adg5431
Tiny black holes could theoretically be used as a source of power: study
"Using black holes as rechargeable batteries and nuclear reactors" (2023) https://arxiv.org/abs/2210.10587 :
> Abstract: This paper proposes physical processes to use a Schwarzschild black hole as a rechargeable battery and nuclear reactor. As a rechargeable battery, it can at most transform 25% of input mass into available electric energy in a controllable and slow way. We study its internal resistance, efficiency of discharging, maximum output power, cycle life and totally available energy. As a nuclear reactor, it realizes an effective nuclear reaction `α particles + black hole → positrons + black hole` and can transform 25% of the α-particle's mass into the kinetic energy of positrons. This process amplifies the available kinetic energy of natural decay hundreds of times. Since some tiny sized primordial black holes are suspected to have an appreciable density in dark matters, the result of this paper implies that such black-hole-originated dark matters can be used as reactors to supply energy.
Isn't there a potential for net relative displacement and so thus couldn't (microscopic) black holes be a space drive?
Is there a known inverse transformation for Hawking radiation, and isn't there such radiation from all things?
Don't black holes store a copy of everything, like reflections from water droplets?
PBS Spacetime estimates that there are naturally occurring microscopic black holes every 30 km on Earth. https://news.ycombinator.com/item?id=33483002 https://westurner.github.io/hnlog/#comment-33483002
And ER=EPR: https://twitter.com/westurner/status/964069567290073089
What are the critical conditions for naturally-occurring and lab- or particle-collider-made black holes? Are there safety concerns, and how would that be perceived?
> Are there safety concerns, and how would that be perceived?
What scale black hole would affect Auroras and e.g. the ionosphere and greater magnetosphere?
Aurora > Causes: https://en.wikipedia.org/wiki/Aurora
Clang now makes binaries an original Pi B+ can't run
It sounds like clang running on the RPi 1 generates code that doesn't run. Usually compilers default to targeting whatever ISA it's running on but that doesn't seem to be the case here.
Usually compilers default to the ISA they were built on. In the case of Clang this is controlled by LLVM_DEFAULT_TARGET_TRIPLE cmake option - maybe a weird mix of options occurred where Clang for armv6 was built on armv7 but the default triple was not adjusted correctly.
Current docs about this option:
> LLVM target to use for code generation when no target is explicitly specified. It defaults to “host”, meaning that it shall pick the architecture of the machine where LLVM is being built. If you are building a cross-compiler, set it to the target triple of your desired architecture.
raspberrypi.com/documentation/computers/linux_kernel.html#cross-compiling-the-kernel: https://www.raspberrypi.com/documentation/computers/linux_ke...
89luca89/distrobox: https://github.com/89luca89/distrobox #quick-start
89luca89/distrobox/blob/main/docs/useful_tips.md#using-a-different-architecture: https://github.com/89luca89/distrobox/blob/main/docs/useful_...
lukechilds/dockerpi: https://github.com/lukechilds/dockerpi : RPi 1, (2,3,) in QEMU emulating ARM64 on x86_64
E.g. the Fedora Silverblue rpm-ostree distro has "toolbox" by default because most everything should be in a container
containers/toolbox: https://github.com/containers/toolbox
From https://containertoolbx.org/distros/ :
> Distro support: By default, Toolbx creates the container using an OCI image called `<ID>-toolbox:<VERSION-ID>`, where <ID> and <VERSION-ID> are taken from the host’s `/usr/lib/os-release`. For example, the default image on a Fedora 36 host would be `fedora-toolbox:36`.
> This default can be overridden by the `--image` option in `toolbox create`, but operating system distributors should provide an adequately configured default image to ensure a smooth user experience.
The compiler arch flags should probably also be correctly specified in a "toolbox" container used for cross-compilation.
There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Ask HN: Can qubits be written to crystals as diffraction patterns?
Coherence time is the operative limit in SOTA quantum computing (QC).
Holographic data storage already solves for binary digital data storage.
Aren't diffraction patterns due to lattice irregularities effective wave functions?
Presumably holography would have already solved for quantum data storage if diffraction is a sufficient analog of a wave function?
No
What is the limit?
Is a diffraction pattern a wave function?
E.g. a scratched microscope slide is all wave functions.
Can (crystal,) lattices be constructed and/or modified to store diffraction patterns with sufficient coherence over time?
Do Black Holes Have Singularities?
It was an interesting read, though mostly it says what we already know: the mathematics of black holes often produces singularities, but we have no evidence to support the notion that they exist. The thing is, whatever is behind the event horizon of a black hole might as well not exist; we can't ever report observations of it, even in principle.
If someday we know what's inside the event horizon, it will emerge from a robust theory of quantum gravity that can be tested at a lower limit.
Wow, I'm like a broken record today. [1]
SQS and SQG do purport to describe the interior topology of black holes.
Shouldn't it be possible to infer the state and relative position of matter/information/energy in a black hole from the Hawking radiation and/or the post-end-stage positions after "dissolution" of such phenomena in the quantum foam?
There's no positive proof of the irreversibility of such thermodynamic transformations.
How many possible subsequent positions of matter could there be after a microscopic or supermassive black hole reaches "critical condition 2"?
From https://news.ycombinator.com/item?id=38452488 :
> Isn't there a potential for net relative displacement and so thus couldn't (microscopic) black holes be a space drive?
> Is there a known inverse transformation for Hawking radiation, and isn't there such radiation from all things?
> Don't black holes store a copy of everything, like reflections from water droplets?
> PBS Spacetime estimates that there are naturally occurring microscopic black holes every 30 km on Earth.
Electricity flows like water in 'strange metals,' and physicists don't know why
"Shot noise in a strange metal" (2023) https://www.science.org/doi/10.1126/science.abq6100 :
> Abstract: Strange-metal behavior has been observed in materials ranging from high-temperature superconductors to heavy fermion metals. In conventional metals, current is carried by quasiparticles; although it has been suggested that quasiparticles are absent in strange metals, direct experimental evidence is lacking. We measured shot noise to probe the granularity of the current-carrying excitations in nanowires of the heavy fermion strange metal YbRh2Si2. When compared with conventional metals, shot noise in these nanowires is strongly suppressed. This suppression cannot be attributed to either electron-phonon or electron-electron interactions in a Fermi liquid, which suggests that the current is not carried by well-defined quasiparticles in the strange-metal regime that we probed. Our work sets the stage for similar studies of other strange metals.
Strange metal -> Fermi liquid theory > Non-Fermi liquid: https://en.wikipedia.org/wiki/Fermi_liquid_theory#Non-Fermi_... :
> The term non-Fermi liquid, also known as "strange metal", [20] is used to describe a system which displays breakdown of Fermi-liquid behaviour.
Google DeepMind's new AI tool helped create more than 700 new materials
"Millions of new materials discovered with deep learning" (2023) https://deepmind.google/discover/blog/millions-of-new-materi...
"Scaling deep learning for materials discovery" (2023) https://www.nature.com/articles/s41586-023-06735-9
Optical effect advances quantum computing with atomic qubits to a new dimension
Does anybody know how the coupling between the individual qubits is achieved/configured?
You can find the paper here, maybe that helps: https://arxiv.org/abs/1902.05424
"Scalable multilayer architecture of assembled single-atom qubit arrays in a three-dimensional Talbot tweezer lattice" (2023) https://arxiv.org/abs/1902.05424
Paperless-Ngx v2.0.0
I haven't been using it too much yet but I am really impressed by paperless-ngx so far. It just works(TM) and the auto-tagging functionality is surprisingly good, even with just a few documents in it.
Does anyone have a good scanner recommendation though? I am eyeing the Brother ADS-1700W since it seems to be recommended often, but I would really like to use the "scan to webhook" feature (it's 2023 after all) instead of SMTP or whatever else are the options I would have with the Brother.
Recommendation: https://www.quickscanapp.com/
I am using an iPhone as a scanner and it automatically scans, OCRs, uploads, and ingests to the paperless-ngx instance, even remotely using Tailscale.
The iPhone camera is more than good enough for scanning documents.
I don't have an iPhone, but on Android there is the "Paperless Mobile" app (https://github.com/astubenbord/paperless-mobile), which can be used to scan as well. There are just some documents that I would prefer to have in proper and consistent "document scanner"-quality; I am always having a hard time with lighting using those phone scanners (although Paperless Mobile is one of the better ones I have used).
Would a document capture camera with a [ring] light also work?
Those still have the speed disadvantage of a phone camera and need more space than a compact document scanner, I'd imagine. I guess a ring light for my phone would be an improvement; using the builtin flash usually leads to very uneven lighting in the scan.
The Nineteenth-Century Banjo
"Tales from the Acoustic Planet, Vol. 3: Africa Sessions" (2009) [2] is the Soundtrack for The Bela Fleck "Throw Down Your Heart" (2009) [1] rockumentary
[2] https://en.wikipedia.org/wiki/Tales_from_the_Acoustic_Planet...
Banjo: https://en.wikipedia.org/wiki/Banjo
Bluegrass; traditional and progressive feature the banjo: https://en.wikipedia.org/wiki/Bluegrass_music :
> These divisions center on the longstanding debate about what constitutes "Bluegrass Music". A few traditional bluegrass musicians do not consider progressive bluegrass to truly be "bluegrass", some going so far as to suggest bluegrass must be [...]
Having enjoyed "electro blues", for some years I had been searching on YT for "electro bluegrass" without success but today it seems the genre is finally populated. (although from what I have found there's still plenty of opportunity to discover the proper cross between high lonesome and EDM)
Miniaturized technique to generate precise wavelengths of visible laser light
https://news.ycombinator.com/item?id=36580174 :
> "Universal visible emitters in nanoscale integrated photonics" (2023) https://opg.optica.org/optica/fulltext.cfm?uri=optica-10-7-8... :
>> Abstract: Visible wavelengths of light control the quantum matter of atoms and molecules and are foundational for quantum technologies, including computers, sensors, and clocks. The development of visible integrated photonics opens the possibility for scalable circuits with complex functionalities, advancing both science and technology frontiers. We experimentally demonstrate an inverse design approach based on the superposition of guided mode sources, allowing the generation and complete control of free-space radiation directly from within a single 150 nm layer Ta2O5, showing low loss across visible and near-infrared spectra [...]
Electrocaloric material makes refrigerant-free solid-state fridge scalable
This is just more efficient Peltier coolers right? Or is this some other effect?
Peltier effect generates spatially-separated hot and cold sides. Electrocaloric effect generates temporally-separated hot and cold periods. The Peltier effect is simpler to harness into a refrigeration unit (put the cold stuff on the cold side, dissipate heat from the hot side), but has lower potential efficiency.
Now we just need to combine it with thermal transistors* on the front and back sides to gate and pump the heat in one direction. Conduct -> Cool -> Insulate -> Heat -> Conduct -> Cool... (while doing the opposite on the heat-sinking side, of course)
(*from 3 weeks ago on HN) https://news.ycombinator.com/item?id=38259991
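A toy Python sketch of that gating sequence (illustrative numbers, no real thermal model):

# Alternate the electrocaloric element between cold and hot phases while
# gating which side it conducts to, pumping heat from the cold reservoir
# to the hot one.
t_cold, t_hot = 20.0, 30.0
for cycle in range(3):
    # E-field off -> element cools; conduct to cold side: absorb heat
    t_cold -= 0.5
    # E-field on -> element heats; conduct to hot side: reject heat
    t_hot += 0.5
print(t_cold, t_hot)  # cold side cooled further, hot side heated further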
Perhaps a black hole could do some of the Heat phase, at least. From https://news.ycombinator.com/item?id=38450636 :
"Using black holes as rechargeable batteries and nuclear reactors" (2023) https://arxiv.org/abs/2210.10587
Is this the same one as last week?
https://news.ycombinator.com/item?id=38359089 :
> "High cooling performance in a double-loop electrocaloric heat pump" (2023) https://www.science.org/doi/10.1126/science.adi5477
> Electrocaloric effect: https://en.wikipedia.org/wiki/Electrocaloric_effect
Innovative Method to Efficiently Harvest Low-Grade Heat for Energy
"Enhancing Efficiency of Low-Grade Heat Harvesting by Structural Vibration Entropy in Thermally Regenerative Electrochemical Cycles" (2023) https://doi.org/10.1002/adma.202303199 :
> Abstract: The majority of waste-heat energy exists in the form of low-grade heat (<100 °C), which is immensely difficult to convert into usable energy using conventional energy-harvesting systems. Thermally regenerative electrochemical cycles (TREC), which integrate battery and thermal-energy-harvesting functionalities, are considered an attractive system for low-grade heat harvesting. Herein, the role of structural vibration modes in enhancing the efficacy of TREC systems is investigated. How changes in bonding covalency, influenced by the number of structural water molecules, impact the vibration modes is analyzed. It is discovered that even small amounts of water molecules can induce the A1g stretching mode of cyanide ligands with strong structural vibration energy, which significantly contributes to a larger temperature coefficient (ɑ) in a TREC system. Leveraging these insights, a highly efficient TREC system using a sodium-ion-based aqueous electrolyte is designed and implemented. This study provides valuable insights into the potential of TREC systems, offering a deeper understanding of the intrinsic properties of Prussian Blue analogs regulated by structural vibration modes. These insights open up new possibilities for enhancing the energy-harvesting capabilities of TREC systems.
Charlie Munger has died
One of the greats of investing. And a value investor. It's all about the profits, not the growth.
Munger is gone, Bogle is gone, Buffett is 93. Who takes up the mantle of value investing now?
You have to remember that Bogle/Munger/Buffett all gained prominence when value investing wasn't a thing and investing of any kind was wildly out of reach for the common man. Today anyone can go online and buy VTI in minutes. Every financial advisor and 401k plan recommends index funds by default, and it is how the vast majority of people and organizations store their wealth. It doesn't need any more cheerleaders or icons. It has simply become synonymous with investing at large.
Value investing is “an investment paradigm that involves investing in stocks that are overlooked by the market and are being traded below their true worth”.
Correct me if wrong, but I don’t think index funds come under that paradigm.
It depends on how the index is constructed. A market cap index cannot be value investing. A market sector index is almost surely not value investing (unless that entire sector is undervalued).
An index constructed specifically using value measures as the criteria for inclusion can be (at least arguably so).
Click on "value indexes" here: https://www.crsp.org/indexes/ to see some underlying value indexes, and funds like this one track the Large Cap version of it: https://fundresearch.fidelity.com/mutual-funds/summary/92290... (perhaps not surprising, the fund's largest holding is Berkshire B shares)
XBRL filings have the information needed to screen with value investing criteria. GFinance's old stock screener's UI was great.
https://github.com/openlink/Virtuoso-RDFIzer-Mapper-Scripts/...
/? query XBRL https://www.google.com/search?q=query+xbrl
https://github.com/topics/xbrl
But then also a fund or an index fund or an Index ETF wouldn't be complete without ethical review for the sustainable competitive advantage given e.g. GRI+#GlobalGoal sustainability reports.
When you own enough of a company to bring in a new team.
- [ ] ENH: pandas_datareader: add XBRL support from one or more APIs
https://pandas-datareader.readthedocs.io/en/latest/remote_da...
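A sketch of what that enhancement might look like, following pandas-datareader's existing DataReader convention (the "xbrl" source name and the column here are hypothetical; no such reader exists yet):

import pandas_datareader as pdr

# Hypothetical XBRL source: tidy fundamentals per filing, usable for
# screening on value criteria.
df = pdr.DataReader("AAPL", "xbrl", start="2020-01-01")  # "xbrl" is not a real source yet
value_screen = df[df["pe_ratio"] < 15]  # hypothetical column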
Microsoft open-sources ThreadX
This was "Azure RTOS", bought by Microsoft in haste after Amazon acquired FreeRTOS.
Bill Lamie left to start PX5 and work on a new lightweight embedded RTOS and took most of the talent with him. If Microsoft is doing this, they're pretty much walking away from their roadmap for Azure RTOS and IoT nodes along those lines.
I call it a win, ThreadX had a lot more ecosystem behind it than FreeRTOS ever did. And it does run on things other than Raspberry Pis. Renesas used to give it away for free if you bought their SoCs.
But are there devs for the acquired platform now?
The article says ThreadX used to be what Intel Management Engine ME ran on? How do I configure the ME / AMT VNC auth in there?
ThreadX has been around for 25 years. You licensed it and put it in your embedded system. It's in a lot of products.
But how many people agreed to sign an NDA in order to cluster fuzz such a critical low-level firmware binary blob component?
Is this like another baseband processor?
The development of ThreadX has nothing at all to do with RPi and VideoCore. It's a software component that was used to develop a larger architecture.
Is it like another Intel ME, phone modem firmware, etc? Absolutely!
Everything from your x64 CPU to microSD to credit cards (not the readers; readers run antiquated Android and soon known-bad Chromium) runs some form of weird, slow, proprietary RTOS. It is what it is. I bet it takes ~a century, with the help of superhuman AI, to make those run open and verified code. The situation is improving too, slowly, because using buggy proprietary code is not a goal, but a means.
It's ok to be disgusted about the status quo, but that is not necessarily worth your time; IIRC one of the original complaints by RMS on the state of software freedom that led to the Free Software Movement was about some HP printer running a buggy custom OS. Even the point where people said enough is enough goes back that far.
Gosh, there's so much wrong here.
> weird slow proprietary
They're often not weird, a simple single task runner with a few libraries to handle common tasks and cryptographic operations. Very simple, lightweight, and they generally share a common high level architecture (there's not much variation in an RTOS)
They're often not slow, they're minimalist OSes - barely qualifying as an OS if at all - designed to run a single task, with time guarantees, and to get out of the way. In fact, if it's a single task you need to run, they're faster than any general purpose OS - by design!
They're often not proprietary - a handful of RTOS with huge market penetration used in billions of devices (and now ThreadX) - are open source and have permissive licenses. What IS often proprietary about them are BSPs, but that's a whole separate issue. Yes, there are a lot of proprietary ones out there, but as a blanket statement, it's simply not true.
> readers run antiquated Android
Many use a stripped down version of AOSP, which has become a de facto standard BSP, yes. But many, many others do not (usually a flavor of embedded linux, or an RTOS).
> about some HP printer running buggy custom OS
It was a Xerox printer, and it was because he was frustrated from adding existing job management and notification features he had written to the new printer.
The IME ran on Minix.
You've got that reversed. IME runs MINIX now, it used to run on ThreadX.
Linux (1991) started as a fork of MINIX (1987) by Tanenbaum.
History of Linux: https://en.wikipedia.org/wiki/History_of_Linux
MINIX: https://en.wikipedia.org/wiki/Minix
Redox OS: https://en.wikipedia.org/wiki/Redox_(operating_system) :
> Redox is a Unix-like microkernel operating system written in the programming language Rust, which has a focus on safety, stability, and performance. [4][5][6] Redox aims to be secure, usable, and free.
> Linux (1991) started as a fork of MINIX (1987) by Tanenbaum.
That is not true and never was.
It was in part bootstrapped on Minix but it contains no Minix code at all and was built with GNU tools.
No, that's definitely a fork (or a clone), albeit with significant differences, including the regression to a macrokernel.
That the MINIX code was replaced before release does not make it not a fork.
Nope.
Fork: take the existing code, make your own version and start modifying. That does not apply here.
Torvalds did not take any Minix code; one of the reasons he did his own was that the licence agreement on Minix prevented distribution of modified versions. At the time Freax/Linux got started, people were distributing patch sets to Minix to add 286 memory management, 386 handling and so on, because they could not distribute modified versions.
The Linux kernel started out as 100% new original code. I was there; I watched the mailing lists and the USEnet posts as it happened. It's the year I started paying for my own personal online account and email, after 4Y in the industry.
The origins of Torvalds' kernel were as a homegrown terminal emulator. He wanted it to be able to do downloads in the background while he worked in a different terminal session. This required timeslicing. He tried and found it was complicated, so he started implementing a very simple little program that time-sliced between 2 tasks, one printing "AAAAAA..." to the console and the other printing "BBBBB..."
This is all documented history, which it seems you have not read.
You are wrong.
Furthermore:
> including the regression to a macrokernel.
This indicates that you are not aware of the differences between Minix 1, 2 and 3.
Minix 3 (2005) is a microkernel.
Minix 1 (1987) was not and does not support an MMU. It runs on the 8086 and 68000 among other things.
Linux (1991) was originally a native 80386 OS, the first x86-32 device and a chip that was not yet on sale the year that Minix 1 was first published.
Summary:
Linux is not a fork of Minix and is unrelated to Minix code.
Intel Management Engine: https://en.wikipedia.org/wiki/Intel_Management_Engine
Designing a SIMD Algorithm from Scratch
SIMD is pretty intuitive if you’ve used deep learning libraries, NumPy, array based or even functional languages
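For example, in NumPy the array-at-a-time style maps naturally onto SIMD lanes; a toy comparison (NumPy dispatches the vectorized path to its Universal SIMD kernels where available):

import numpy as np

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

# Scalar style: one element at a time (pure-Python loop)
out_scalar = [x + y for x, y in zip(a, b)]

# Array style: one operation over many elements at once
out_simd = a + b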
NumPy roadmap: https://numpy.org/neps/roadmap.html :
> Improvements to NumPy’s performance are important to many users. We have focused this effort on Universal SIMD (see NEP 38 — Using SIMD optimization instructions for performance) intrinsics which provide nice improvements across various hardware platforms via an abstraction layer. The infrastructure is in place, and we welcome follow-on PRs to add SIMD support across all relevant NumPy functions
"NEP 38 — Using SIMD optimization instructions for performance" (2019) https://numpy.org/neps/nep-0038-SIMD-optimizations.html#nep3...
NumPy docs > CPU/SIMD Optimizations: https://numpy.org/doc/stable/reference/simd/index.html
std::simd: https://doc.rust-lang.org/std/simd/index.html
"Show HN: SimSIMD vs SciPy: How AVX-512 and SVE make SIMD nicer and ML 10x faster" (2023-10) https://news.ycombinator.com/item?id=37808036
"Standard library support for SIMD" (2023-10) https://discuss.python.org/t/standard-library-support-for-si...
Automatic vectorization > Techniques: https://en.wikipedia.org/wiki/Automatic_vectorization#Techni...
SIMD: Single instruction, multiple data: https://en.wikipedia.org/wiki/Single_instruction,_multiple_d...
Category:SIMD computing: https://en.wikipedia.org/wiki/Category:SIMD_computing
Vectorization: Introduction: https://news.ycombinator.com/item?id=36159017 :
> GPGPU > Vectorization, Stream Processing > Compute kernels: https://en.wikipedia.org/wiki/General-purpose_computing_on_g...
Model Correctly Predicts High-Temperature Superconducting Properties
"Superconductivity studied by solving ab initio low-energy effective Hamiltonians for carrier doped CaCuO2, Bi2Sr2CuO6, Bi2Sr2CaCu2O8, and HgBa2CuO4," https://link.aps.org/doi/10.1103/PhysRevX.13.041036
Building a Small REPL in Python
But then what about tests?
https://github.com/4dsolutions/python_camp/pull/4/files :
import unittest
from io import StringIO
from unittest.mock import patch

class TestCalculatorREPL(unittest.TestCase):
    @patch("sys.stdin", StringIO("1"))
    @patch("sys.stdout", new_callable=StringIO)
    def test__(self, mock_stdout):
        # stdin is patched to feed "1" to the REPL; run it here
        # and assert on mock_stdout.getvalue()
        pass
Ooh, this is interesting. Thanks for the tip. The rest of the project is well-tested but I've been cowboy developing the REPL.
The `hanging-punctuation property` in CSS
I’ve been tending towards the opinion that features like this are a bad idea: that we might be better served by a general “this is prose” signal that triggers things like Knuth-Plass line-breaking, partially-hung punctuation (not full, please not full, it’s awful), conservative hyphenation, maybe even spacing and stretching tweaks to reduce the need of hyphenation (like the CTAN microtype package provides), whatever else the user agent supports. Bundle it all up in an all/auto/none property that defaults to auto, and let browsers choose heuristics for normal content. (OK, so I wouldn’t limit it quite that hard, I’d make it a little more like font-variant, but I do believe the default should be heuristic-based rather than disabled.)
> a general “this is prose” signal that triggers things like […] conservative hyphenation
At the least, that requires specifying the language of the text, probably even subtle differences such as those between UK and US English (although those can probably be handled very conservatively most of the time, as long words are fairly rare in English, compared to, say, Dutch or German)
`hyphens: auto` is already language-aware. (It’s just that currently it either does nothing or too much, because the first-fit line breaking algorithm is lousy for hyphenation.)
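For context, first-fit vs. Knuth-Plass is greedy vs. global optimization over break points; a toy Python sketch of the DP (it ignores hyphenation and stretching, and unlike the real algorithm it also penalizes the last line):

def break_lines(words, width):
    # Minimize total squared trailing slack over all break choices.
    n = len(words)
    best = [0.0] + [float("inf")] * n  # best[i]: min badness for words[:i]
    prev = [0] * (n + 1)
    for i in range(1, n + 1):
        line_len = -1
        for j in range(i, 0, -1):  # candidate last line: words[j-1:i]
            line_len += len(words[j - 1]) + 1
            if line_len > width:
                break
            badness = (width - line_len) ** 2
            if best[j - 1] + badness < best[i]:
                best[i], prev[i] = best[j - 1] + badness, j - 1
    lines, i = [], n
    while i > 0:
        lines.append(" ".join(words[prev[i]:i]))
        i = prev[i]
    return lines[::-1]

print(break_lines("the quick brown fox jumps over the lazy dog".split(), 12))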
My toddler loves planes, so I built her a radar
This is cool in concept, but FlightRadar24 has a built-in Augmented Reality feature that works really well.
https://www.flightradar24.com/blog/show-us-your-best-augment...
Also, if I were to build my own local copy, I'd use an RTLSDR to get the ADSB packets direct and base my app on tar1090. https://github.com/wiedehopf/tar1090
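A minimal sketch of decoding one ADS-B frame in Python with pyModeS (calls per the pyModeS README; the hex frame is its documentation example):

import pyModeS as pms  # pip install pyModeS

msg = "8D4840D6202CC371C32CE0576098"  # example identification frame
print(pms.adsb.icao(msg))      # ICAO address, e.g. '4840D6'
print(pms.adsb.callsign(msg))  # callsign, e.g. 'KLM1023_'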
MSFS (MS Flight Simulator) has real-time Flight and Weather data and works in Steam's Proton fork of WINE on Linux.
FWICS there are third-party open source tools for adding live Flight data and logical behaviors to flight simulator applications.
https://fslivetrafficliveries.com/user-guide/ :
> FSLTL is a free standalone real-time online traffic overhaul and VATSIM model-matching solution for MSFS.
(... TIL about FlyPadOS3 EFB: An EFB is intended primarily for cockpit/flightdeck or cabin use. For large and turbine aircraft, FAR 91.503 requires the presence of navigational charts on the airplane. If an operator's sole source of navigational chart information is contained on an EFB, the operator must demonstrate the EFB will continue to operate throughout a decompression event, and thereafter, regardless of altitude. https://docs.flybywiresim.com/fbw-a32nx/feature-guides/flypa...)
https://twinfan.gitbook.io/livetraffic/ :
> LiveTraffic is a plugin for the flight simulator X-Plane to show real-life traffic, based on publicly available live flight data, as additional planes within X-Plane. [...]
> I spent an awful lot of time dealing with the inaccuracies of the data sources, see [Limitations]. There are only timestamps and positions. Heading and speed is point-in-time info but not a reliable vector to the next position. There is no information on pitch or bank angle, or on gear or flaps positions. There is no info where exactly a plane touched or left ground. There are several data feeders, which aren't in synch and contradict each other.
...
"Google Earth 3D Models Now Available as Open Standard (GlTF)" (2023) ; land, buildings: https://news.ycombinator.com/item?id=35896176
https://developers.google.com/maps/documentation/tile/3d-til... :
> Photorealistic 3D Tiles are a 3D mesh textured with high resolution imagery. They offer high-resolution 3D maps in many of the world's populated areas. They let you power next-generation, immersive 3D visualization experiences to [...]
GMaps WebGL overlay API: https://developers.google.com/maps/documentation/javascript/...
...
From "GraphCast: AI model for weather forecasting" (2023) https://news.ycombinator.com/item?id=38267794 :
> TIL about Raspberry-NOAA and pywws in researching and summarizing for a comment on "Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle" (2023) https://news.ycombinator.com/item?id=38158091
...
"Show HN: I wrote a multicopter simulation library in Python" (2023) https://news.ycombinator.com/item?id=38255362 :
> [ X-Plane Plane Maker, Juno: New Origins (and also Hello Engineer), MS Flight Simulator cockpits are built with MSFS Avionics Framework which is React-based, [Multi-objective gym + MuJoCo] for drone simulation, cfd and helicopters ]
...
"DroneAid: A Symbol Language and ML model for indicating needs to drones, planes" (2020) https://github.com/Code-and-Response/DroneAid https://news.ycombinator.com/item?id=22707347 ... https://github.com/Call-for-Code/Project-Catalog
Global talks to cut plastic waste stall as industry and environment groups clash
What are some of the solutions to plastic pollution?
(Edit)
What are some solutions to the internal and external costs of plastic production, distribution, consumption, and waste?
Solutions to cut down on pollution, or to clean up the plastic pollution that has accumulated in the environment for decades?
Gated suppression of light-driven proton transport through graphene electrodes
Perhaps tangentially,
"Cheap proton batteries compete with lithium on energy density" (2023) https://news.ycombinator.com/item?id=36926123 :
"Enhancement of the performance of a proton battery" (2023) https://www.sciencedirect.com/science/article/abs/pii/S03787... :
> Abstract: The present paper reports on experiments to improve theoretical understanding of the basic processes underlying the operation of a ‘proton battery’ with activated carbon as a hydrogen storage electrode. Design changes to enhance energy storage capacity and power output have been identified and investigated experimentally. Key changes made were heating of the overall cell to 70 °C, and replacement of the oxygen-side gas diffusion layer with a much thinner titanium-fibre sheet. A very substantial increase in reversible hydrogen storage capacity to 2.23 wt%H (598 mAh g−1, 882 J g−1) was achieved. This capacity is nearly three times that of the earlier design, and more than double the highest electrochemical hydrogen storage using an acidic electrolyte previously reported. It is hypothesised that the main cause of the major gain in storage is an enhanced water formation reaction on the O-side through reduced flooding. In addition, an alternative mode of discharging a proton battery has been discovered that allows direct generation of hydrogen gas from the hydrogenated carbon material, by a ‘hydrogen-pump’ type of reaction. The hydrogen gas evolved is high purity, and thus may ultimately create opportunities for use of this storage technology in hydrogen supply chains for fuel cell vehicles. [And probably other applications as well]
(Edit) A combined proton battery + graphene proton hydrolysis unit should probably keep the hydrolysis module as a separate part for replaceability?
"Gate-controlled suppression of light-driven proton transport through graphene electrodes" (2023) https://www.nature.com/articles/s41467-023-42617-4 :
> Abstract: Recent experiments demonstrated that proton transport through graphene electrodes can be accelerated by over an order of magnitude with low intensity illumination. Here we show that this photo-effect can be suppressed for a tuneable fraction of the infra-red spectrum by applying a voltage bias. Using photocurrent measurements and Raman spectroscopy, we show that such fraction can be selected by tuning the Fermi energy of electrons in graphene with a bias, a phenomenon controlled by Pauli blocking of photo-excited electrons. These findings demonstrate a dependence between graphene’s electronic and proton transport properties and provide fundamental insights into molecularly thin electrode-electrolyte interfaces and their interaction with light.
"Graphene proton transport could revolutionize renewable energy" (2023) https://interestingengineering.com/science/graphene-proton-t... :
> Scientists have found a way to speed up proton transport across graphene using light. The innovation could open up new avenues to producing green hydrogen.
AI system self-organises to develop features of brains of complex organisms
If I understand this correctly: they added a distance property to a NN and set the training to penalize increased distance between nodes, simulating a physical constraint. Under these conditions, 'hubs' emerged which facilitated connections across distance. The other observation was that "the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations".
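A minimal sketch of that distance-penalty idea (the toy fully-connected layer and the Euclidean wiring cost are assumptions for illustration, not the paper's setup):

```python
# Weights pay an L1 cost scaled by the physical distance between the units
# they connect; long-range connections are expensive, as in spatially
# embedded networks.
import torch

n = 64
positions = torch.rand(n, 3)                # embed units at random 3D points
dist = torch.cdist(positions, positions)    # pairwise Euclidean distances

W = torch.randn(n, n, requires_grad=True)
opt = torch.optim.Adam([W], lr=1e-2)

x = torch.randn(128, n)
target = torch.randn(128, n)
for step in range(200):
    task_loss = torch.nn.functional.mse_loss(x @ W, target)
    wiring_cost = (W.abs() * dist).sum()    # the distance penalty
    loss = task_loss + 1e-4 * wiring_cost
    opt.zero_grad()
    loss.backward()
    opt.step()
```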
The work suggests that existing approaches to neural network architecture would benefit from more closely emulating the operation of the brain in this regard.
> existing approaches to neural network architecture would benefit from more closely emulating the operation of the brain in this regard.
From https://news.ycombinator.com/item?id=38334538#38336861 :
> Which NN architectures could be sufficient to simulate the entire human brain with spreading activation in 11 dimensions?
Vtracer: Next-Gen Raster-to-Vector Conversion
Maybe Facebook's Segment Anything could replace the first clustering step?
I had a similar idea the other day after fighting with inkscape tracing! The problem with auto tracing is lack of content awareness so it's just shapes and colors leading to strange objects that require lots of tinkering.
I'm going to try it: Use segment anything to get object masks, Trace each object separately and combine from there!
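A hedged sketch of that pipeline (the SAM checkpoint filename and the use of the vtracer CLI, rather than its Python bindings, are assumptions):

```python
# Segment first, then trace each mask separately and combine downstream.
import subprocess

import cv2
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

image = cv2.cvtColor(cv2.imread("input.png"), cv2.COLOR_BGR2RGB)
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
masks = SamAutomaticMaskGenerator(sam).generate(image)

for i, m in enumerate(masks):
    seg = m["segmentation"]          # boolean HxW mask for one object
    cutout = np.zeros_like(image)
    cutout[seg] = image[seg]
    cv2.imwrite(f"object_{i}.png", cv2.cvtColor(cutout, cv2.COLOR_RGB2BGR))
    # trace each object on its own, then merge the SVGs as a later step
    subprocess.run(["vtracer", "--input", f"object_{i}.png",
                    "--output", f"object_{i}.svg"], check=True)
```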
> Comparing to Potrace which only accept binarized inputs (Black & White pixmap), VTracer has an image processing pipeline which can handle colored high resolution scans. tl;dr: Potrace uses a O(n^2) fitting algorithm, whereas vtracer is entirely O(n).
What is the Big-O of the algorithm with Segment Anything or other segmentation approaches?
Potrace: https://en.wikipedia.org/wiki/Potrace
Inkscape's Ctrl+L Simplify feature attempts to describe the same path with fewer points/Bezier curves.
Could this approach also help with 3d digitization?
TIL about https://github.com/fogleman/primitive from "Comparison of raster-to-vector conversion software" https://en.wikipedia.org/wiki/Comparison_of_raster-to-vector... which does already list vtracer (2020)
visioncortex/vtracer: https://github.com/visioncortex/vtracer
Vector graphics https://en.wikipedia.org/wiki/Vector_graphics
Rotoscoping: https://en.wikipedia.org/wiki/Rotoscoping
Sprite (computer graphics) https://en.wikipedia.org/wiki/Sprite_(computer_graphics)
E.g. pygame-web can do SVG sprites, so you don't have to do pixel art, and sprite scaling just works.
2.5D: https://en.wikipedia.org/wiki/2.5D
3D scanning: https://en.wikipedia.org/wiki/3D_scanning
"Why Cities: Skylines 2 performs poorly" (2023) ... No AutoLOD Level of Depth https://news.ycombinator.com/item?id=38160089
Wavetale is a 3D game with extensive and visually impressive vector graphics.
AI is currently just glorified compression
Relevant paper: https://arxiv.org/abs/2311.13110
"White-Box Transformers via Sparse Rate Reduction" (2023) ; https://arxiv.org/abs/2311.13110 https://scholar.google.com/scholar?cites=1536453281127121652... :
> Abstract: In this paper, we contend that the objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a mixture of low-dimensional Gaussian distributions supported on incoherent subspaces. The quality of the final representation can be measured by a unified objective function called sparse rate reduction. From this perspective, popular deep networks such as transformers can be naturally viewed as realizing iterative schemes to optimize this objective incrementally. Particularly, we show that the standard transformer block can be derived from alternating optimization on complementary parts of this objective: the multi-head self-attention operator can be viewed as a gradient descent step to compress the token sets by minimizing their lossy coding rate, and the subsequent multi-layer perceptron can be viewed as attempting to sparsify the representation of the tokens. This leads to a family of white-box transformer-like deep network architectures which are mathematically fully interpretable. Despite their simplicity, experiments show that these networks indeed learn to optimize the designed objective: they compress and sparsify representations of large-scale real-world vision datasets such as ImageNet, and achieve performance very close to thoroughly engineered transformers such as ViT. Code is at https://github.com/Ma-Lab-Berkeley/CRATE
"Bad numbers in the “gzip beats BERT” paper?" (2023) https://news.ycombinator.com/context?id=36766633
"78% MNIST accuracy using GZIP in under 10 lines of code" (2023) https://news.ycombinator.com/item?id=37583593
Ask HN: Name names and thank open source maintainers of small projects!
Are there some small open source projects you really love? Take some time to thank the maintainers today. If they have a discussion forum where they welcome feedback, go thank them there. If there's nothing like that you can find, do it here. Post a comment. That's all it takes!
Mention the project(s) you love, why you love them, name the authors/maintainers you want to thank and thank them!
Try to find examples of small open source projects that you benefit from. Utilities, libraries, services, games, amusement software, everything counts!
The big projects like curl, Linux, etc. get a lot of limelight from everyone. Let us try to appreciate the small projects too that you benefit. I'm sure the authors/maintainers will appreciate this gesture!
Happy Thanksgiving!
WebGL Water
The illustrations here could probably also be so modeled: https://physics.aps.org/articles/v16/196 https://news.ycombinator.com/item?id=38369731
Newer waveguide approaches, for example with dual or additional beams, could also be visualized this way.
Three.js interactive webgl particle wave simulator: https://threejs.org/examples/webgl_points_waves.html
From https://news.ycombinator.com/item?id=38028794 re: a new ultrasound wave medical procedure:
> "Quantum light sees quantum sound: phonon/photon correlations" (2023) https://news.ycombinator.com/item?id=37793765 ; the photonic channel actually embeds the phononic field
Phonon: https://en.wikipedia.org/wiki/Phonon :
> Phonons can be thought of as quantized sound waves, similar to photons as quantized light waves.[2] However, photons are fundamental particles that can be individually detected, whereas phonons, being quasiparticles, are an emergent phenomenon. [3]
> The study of phonons is an important part of condensed matter physics. They play a major role in many of the physical properties of condensed matter systems, such as thermal conductivity and electrical conductivity, as well as in models of neutron scattering and related effects.
Electron behavior is also fluidic in Superfluids (e.g. Bose-Einstein Condensates).
SQS Superfluid Quantum Space
"Can we make a black hole? And if we could, what could we do with it?" (2022) https://news.ycombinator.com/item?id=31383784 :
> "Gravity as a fluid dynamic phenomenon in a superfluid quantum space. Fluid quantum gravity and relativity." (2017) :
> [...] Vorticity is interpreted as spin (a particle's internal motion). Due to non-zero, positive viscosity of the SQS, and to Bernoulli pressure, these vortices attract the surrounding quanta, pressure decreases and the consequent incoming flow of quanta lets arise a gravitational potential. This is called superfluid quantum gravity.
And it's n-body and fluidic.
Curl, Spin, and Vorticity:
Vorticity: https://en.wikipedia.org/wiki/Vorticity
From https://news.ycombinator.com/item?id=31049970 https://westurner.github.io/hnlog/#comment-31049970 ... CFD, jax-cfd:
> Thus our best descriptions of emergent behavior in fluids (and chemicals and fields) must presumably be composed at least in part from quantum wave functions that e.g. Navier-Stokes also fit for; with a fitness function.
From "Light and gravitational waves don't arrive simultaneously" https://news.ycombinator.com/item?id=38056295 :
> TLDR; In SQS (Superfluid Quantum Space), Quantum gravity has fluid vortices with Gross-Pitaevskii, Bernoulli's, and IIUC so also Navier-Stokes; so Quantum CFD (Computational Fluid Dynamics).
Show HN: Neum AI – Open-source large-scale RAG framework
Over the last couple months we have been supporting developers in building large-scale RAG pipelines to process millions of pieces of data.
We documented our approach in an HN post (https://news.ycombinator.com/item?id=37824547) a couple weeks ago. Today, we are open sourcing the framework we have developed.
The framework focuses on RAG data pipelines and provides scale, reliability, and data synchronization capabilities out of the box.
For those newer to RAG, it is a technique to provide context to Large Language Models. It consists of grabbing pieces of information (e.g. pieces of news articles, papers, descriptions, etc.) and incorporating them into prompts to help contextualize the responses. The technique goes one level deeper in finding the right pieces of information to incorporate. The search for relevant information is done through the use of vector embeddings and vector databases.
Those pieces of news articles, papers, etc. are transformed into vector embeddings that represent the semantic meaning of the information. These vector representations are organized into indexes where we can quickly search for the pieces of information that most closely resemble (from a semantic perspective) a given question or query. For example, if I take news articles from this year, vectorize them, and add them to an index, I can quickly search for pieces of information about the US elections.
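A minimal sketch of that vectorize-and-search step (assuming sentence-transformers for the embeddings; a production pipeline like Neum's would use a vector DB instead of brute-force cosine):

```python
# Embed, index (here: a plain matrix), and search by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["Polls open across the US for the elections.",
        "A new proton battery design was announced.",
        "Turnout is high in several swing states."]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["US elections"], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec            # cosine similarity; vectors are unit
for i in np.argsort(-scores)[:2]:        # top-2 most semantically similar
    print(f"{scores[i]:.3f}  {docs[i]}")
```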
To help achieve this, the Neum AI framework features:
Starting with built-in data connectors for common data sources, embedding services and vector stores, the framework provides modularity to build data pipelines to your specification.
The connectors support pre-processing capabilities to define loading, chunking and selecting strategies to optimize content to be embedded. This also includes extracting metadata that is going to be associated to a given vector.
The generated pipelines support large scale jobs through a high throughput distributed architecture. The connectors allow you to parallelize tasks like downloading documents, processing them, generating embeddings and ingesting data into the vector DB.
For data sources that might be continuously changing, the framework supports data scheduling and synchronization. This includes delta syncs where only new data is pulled.
Once data is transformed into a vector database, the framework supports querying of the data including hybrid search using the available metadata added during pre-processing. As part of the querying process, the framework provides capabilities to capture feedback on retrieved data as well as run evaluations against different pipeline configurations.
Try it out, and if you're interested in chatting more about this, shoot us an email: founders@tryneum.com
DAIR.AI > Prompt Engineering Guide > Techniques > Retrieval Augmented Generation (RAG) https://www.promptingguide.ai/techniques/rag
SEC charges Kraken for operating as an unregistered securities exchange
I feel like this won't end up going well for the SEC. This whole methodology of telling exchanges "I don't know, you figure it out" when they ask for clarity and then turning around and suing them for not figuring it out is extremely shaky legal ground.
Matt Levine I think put it best: if Bernie Madoff were to go to the SEC and ask them "I don't know how to run a Ponzi scheme legally, can you please update the guidelines to make it easier", of course the SEC is going to refuse. And the situation with cryptocurrencies seems to be broadly similar: there's actually a pretty clear answer as to what would need to be done to be fully above-the-board and legal, it's just not what the cryptocurrency people want, so they want the SEC to make it easier for them.
And given that we've seen exchange after exchange fail to perform basic tasks like "don't commingle customer funds," I have a hard time feeling any sympathy for cryptocurrency companies here.
> And given that we've seen exchange after exchange fail to perform basic tasks like "don't commingle customer funds," I have a hard time feeling any sympathy for cryptocurrency companies here.
There is nothing per se nefarious about co-mingling customer funds, provided that you are otherwise compliant with the law.
Banks, for instance, don't just co-mingle customer funds; they invest those funds on their own behalf and reap the profits for themselves. Sometimes a bank will share a portion of its profit with its customers, in the form of interest; more often, the bank pays little or no interest, and actually charges the customer fees. A bank will risk its customers' money, and its customers will pay for that privilege.
Kraken has been operating under the money transmitter licensing scheme for a decade, and like banks, money transmitters don't have any legal requirement to segregate customer funds—although they do have the responsibility to maintain sufficient cash balances or liquid investments to cover all of what they owe to customers.
Whether Kraken is breaking the law is something that will likely be decided by a court. The SEC asserts that Kraken has broken the law, but it is not up to the SEC to decide—it is up to the courts.
It is not illegal to operate a spot commodities exchange without approval from the SEC, unless those commodities are also the kinds of securities the trading of which require SEC approval. It is also not illegal to operate as an unlicensed broker-dealer of non-security commodities, or as an unlicensed custodian of non-security commodities. The SEC only has jurisdiction over securities.
It is not in the slightest bit clear yet that the crypto tokens for sale on Kraken are in fact securities of any kind. The SEC asserts that they are, but at this point it is just an assertion. The SEC will have to win in court.
> For what its worth, banks don't just co-mingle customer funds, they invest those funds on their own behalf and reap the profits for themselves.
And that is why banks are regulated, must register, follow certain rules etc.
> Kraken has been operating under the money transmitter licensing scheme for a decade
Yeah, but that doesn't give them a license to operate a securities exchange, or a bank. How many other money transmitters (that are not registered securities exchanges or banks) have 'tokens for sale' like Kraken?
> How many other money transmitters (that are not registered securities exchanges or banks) have 'tokens for sale' like Kraken?
Quite a few. In fact, all of the major centralized exchanges operating in the US are authorized to do so because they are licensed money transmitters (with the exception of some that might be operating under the NYS "Trust" licensing scheme), and none of them are broker-dealers regulated by the SEC, because until very recently the SEC has taken the position that broker-dealers are not allowed to sell crypto assets.
For years the money transmitter licensing scheme was understood to be the correct (and sufficient) licensing scheme under which a crypto exchange could legally operate in the United States. It is only since the beginning of the Biden administration that the SEC has taken the position that crypto exchanges have an obligation to register with the SEC. The Ripple lawsuit was filed at the tail end of the Trump administration (after the election), but it was not targeted at exchanges. It's also worth noting that the judge in the Ripple case found that exchange-traded XRP tokens are not securities. In other words, the SEC's assertion of authority over exchange-traded Ripple tokens was explicitly denied.
There is good reason to believe that the SEC will be unsuccessful in asserting even broader authority over all tokens that are available on Kraken.
For such a specific claim against Kraken to be heard, isn't it necessary to first establish that such assets are themselves securities?
Am I crazy to call this vexatious harassment?
If P then Q:
If {x, y, z} are securities, [ then {Exchanges A, B, and C} have provided securities exchange services for assets {x, y, z} without the requisite license and thus owe a civil fine. ]
But how is a suit against Exchange A the appropriate forum to hear whether assets {x, y, or z} are securities?
Given that - presumably - assets {x, y, z} have not yet been ruled to be securities, there was not sufficient cause or standing to claim bad faith or an intent to provide exchange services for unregistered securities.
Exchange A operated in good faith, pursued the requisite state and federal procedures for assessing whether or not such assets were securities, and specifically does not intend to sell securities.
Were there an is_this_a_security() review function at a US government regulatory agency, defendants would be required to request such review before listing said specific types of assets.
Uhh, what? I've always considered Kraken to be the most rules-abiding exchange - more so than Coinbase. They're quite shrewd in what they're willing to list.
I wonder if SEC is charging Coinbase soon, too?
I mean, Kraken is not a registered securities exchange.
They offer many cryptocurrencies.
The SEC has indicated that it considers most, probably all, cryptocurrencies to be securities.
The writing has been on the wall for a while -- Coinbase got a Wells letter, basically a "lawsuit is coming" warning, months ago.
The Coinbase Wells letter was in reference to their interest-bearing offerings.
Presumably Coinbase and Kraken had to register as banks to offer FDIC-insured accounts (and debit cards)?
Non-Security Deposits are interest-bearing products that are not securities.
Non-Security Deposits: CD Certificates of Deposit, MMA Money Market Accounts, Treasury Bills, Savings accounts, Checking Accounts
Do banks require SEC registration to offer interest-bearing Non-Security Deposit products?
Have banks ever been required to qualify interest-bearing products as securities contracts, after qualifying each product for listing in each US state of operation?
They absolutely are not banks or none of this would be happening.
This also is about unregistered securities and not interest-bearing accounts, although that too is an issue because they're not banks.
edit: They're Money Services Businesses and that's it. They might have some state-level lending licenses but I'm fuzzy on that.
FDIC: https://en.wikipedia.org/wiki/Federal_Deposit_Insurance_Corp... :
> The Federal Deposit Insurance Corporation (FDIC) is a United States government corporation supplying deposit insurance to depositors in American commercial banks and savings banks.
https://www.sifma.org/resources/general/firms-guide-to-the-c.... :
> Any broker-dealer that is a member of a national securities exchange or Financial Industry Regulatory Authority (FINRA) and handles orders must report to CAT. Eligible securities include NMS stocks, listed options, and over-the-counter (OTC) equity securities.
Interledger Protocol works with any type of ledger, has a defined messaging spec, and has multi-hop audit trails: https://westurner.github.io/hnlog/#comment-36503888
Coinbase and Kraken are not registered as banks, and do not offer FDIC insured accounts.
> Do banks require SEC registration to offer interest-bearing Non-Security Deposit products?
No, because they are registered and regulated as banks.
Whether USD deposits have FDIC protection: $250K (x2 *) or the balance of the account, whichever is lower, since the Great Recession [1] (before that it was $100K per account).
[1] https://en.wikipedia.org/wiki/Federal_Deposit_Insurance_Corp...
In 1999, GLBA [2] changed the 1933 Glass-Steagall rule [3] that had prevented banks from investing savings deposits, a rule meant to ensure that they would have enough on hand to prevent another run. (As depicted in "It's a Wonderful Life" (1946); Clarence the angel, or Mr. Potter's Pottersville.)
I'm not sure it's anywhere explicitly stated that the banks' socialist FDIC corporation justified allowing the investing of savings deposits. In effect, the banks created a large shared prepaid credit line for themselves in order to operate safely.
Banks invest in non-securities, without any agreement for future performance.
Banks invest in treasuries, which are tokenizable non-security deposits.
(Some time later, the dotcom boom went bust and the US turned to war/oil/defense instead of clean energy (like the solar panels that were on the roof until 1980, due to the oil-crisis CPI hostage situation, when it became necessary to defensively meddle in the ME, with the blowback left for Obama to handle, and not pay for))
[2] https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bl...
[3] https://en.wikipedia.org/wiki/Glass%E2%80%93Steagall_legisla...
https://help.coinbase.com/en/coinbase/other-topics/other/cli... :
> How is client cash stored at Coinbase? The vast majority of Coinbase client cash is stored in FDIC-insured bank accounts and U.S. government money market funds to keep it safe and liquid. Like all assets on Coinbase, we hold client cash 1:1 and your assets are your assets.
https://support.kraken.com/hc/en-us/articles/360001372126-Ar... :
> Are balances stored on Kraken insured? Cryptocurrency exchanges do not qualify for deposit insurance programs because exchanges are not savings institutions. Exchanges are not even meant to be cryptocurrency wallets.
https://www.investopedia.com/kraken-vs-coinbase-5120700 says that Kraken ended staking services in the US in February 2023.
There is yet no FDIC protection for any stablecoin, and yet no CBDC (just FedNow), but US banks are specifically allowed to provide crypto custody services.
Show HN: New visual language for teaching kids to code
Pickcode is a new language and editor for getting kids started with coding. The code editing experience is totally structured, where you select choices from menus rather than typing. I made Pickcode after experiences teaching kids both block coding (Scratch, App Inventor) and Python. To me, block coding is too far removed from regular coding for kids to make the connection. Pickcode provides a much clearer transition path for students to Python/JS/Java. Our target market is middle/early high school kids, and that’s who we’ve tested the product with during development.
On the site, you can do tutorials to make chatbots, animated drawings, and 2D games. We have a full Intro to Pickcode course, as well as an Intro to Python course where you make regular console programs with a regular text editor. There are 30 or so free lessons accessible with an account, and the rest are paywalled for $5/month.
For professional programmers, the editor is probably pretty frustrating to use (no vim keybindings!), but I hope it’s at least interesting to play with from a UI perspective. If you have kids aged 10-14, I’d love any feedback you have from trying it out with them. I love talking to users, reach out at charlie@pickcode.io!
awesome-python-in-education > Interactive Environments: https://github.com/quobit/awesome-python-in-education#intera...
'Electrocaloric' heat pump could transform air conditioning
> The use of environmentally damaging gases in air conditioners and refrigerators could become redundant if a new kind of heat pump lives up to its promise. A prototype, described in a study published last week in Science [1], uses electric fields and a special ceramic instead of alternately vaporizing a refrigerant fluid and condensing it with a compressor to warm or cool air.
"High cooling performance in a double-loop electrocaloric heat pump" (2023) https://www.science.org/doi/10.1126/science.adi5477
[deleted]
Electrocaloric effect: https://en.wikipedia.org/wiki/Electrocaloric_effect
An equation co-written with AI reveals monster rogue waves form 'all the time'
> Until the new study, many experts believed the majority of rogue waves formed when two waves combined into a single, massive mountain of water. Based on the new equation, however, it appears the biggest influence is owed to “linear superposition.” First documented in the 1700’s, such situations occur when two wave systems cross paths and reinforce one another, instead of combining. This increases the likelihood of forming massive waves’ high crests and deep troughs. Although understood to exist for hundreds of years, the new dataset offers concrete support for the phenomenon and its effects on wave patterns.
AI Proxy to swap in any LLM while using OpenAI's SDK
How does this compare to LocalAI? https://github.com/mudler/LocalAI
The AI Proxy from the post is for using multiple local and remote LLMs over the OpenAI API (along with API key management apparently), while LocalAI is only for using local LLMs.
promptfoo and ChainForge do multi-LLM comparisons and benchmarking: https://news.ycombinator.com/item?id=37447885
NVK reaches Vulkan 1.0 conformance
Can anyone ELI5 this for me?
Does this mean NVidia GPUs don't need the proprietary driver anymore? Does this put NVidia on par with Radeon via amdgpu/RADV regarding OSS Linux support?
Yes, this is indeed a replacement for the proprietary driver. However, Vulkan 1.0 is the most basic version of Vulkan. Right now we are at 1.3, plus there are many Vulkan extensions which need to be implemented aside from the core version.
In other words, this is a good start, but with only Vulkan 1.0 you won't be able to use something like DXVK, for running DirectX games with Proton/Wine.
Vulkan > History: https://en.wikipedia.org/wiki/Vulkan
Ask HN: What might Aaron Swartz have said about AI today?
I only had the pleasure of meeting Aaron once, at the YC open house after the first startup school in Cambridge. I was pitching sort of a competitor to Infogami and he helpfully whipped out his Sidekick and showed me a bunch of stuff. For the first and only time in my life, I was immediately struck by the thought of “now this is a kid who understands things.” His later work only reaffirmed my view and, though I could only watch from afar, he was critical to building a different kind of world. His blog was always insightful and a source of value.
Often, since the announcement of ChatGPT, I’ve wondered “what might Aaron have thought about this?”
Perhaps those of you who had the good fortune to know him better might share anything he might have said about AI or knowledge silos or the nature of information or free will or anything related?
Trying to find a link to the story of Aaron et al. (with declared intent) generating fraudulent ScholarlyArticles, submitting them to journals, and measuring the journal acceptance rate.
I see US vs Aaron, but no link to the ScholarlyArticle about - was it Markov chains, in like 2007? - submission of ScholarlyArticles and journal acceptance rates.
I mean, a reddit submission with markdown from nbconvert is basically a ScholarlyArticle if there's review and an IRB or similar.
Ask HN: What's the state of the art for drawing math diagrams online?
I'm interested in having high-quality math diagrams on a personal website. I want the quality to be comparable to TikZ, but the workflows are cumbersome and it doesn't integrate with MathJax/KaTeX.
Ideally I would be able to produce the diagrams in JS with KaTeX handling the rendering of the labels, but this doesn't seem to exist (I'm a software engineer, so I'm wondering if I should try to make it...). It would also be nice to have the diagram be controllable by JS or animatable, but that's not a requirement.
What are other people using?
Things I've considered:
TikZ options:
* TikZ exported to SVG
* Writing the TikZ in something else, e.g. the library PyTikZ I found, which is old but which I could contribute updates to; that way at least I don't have to wrangle TikZ's horrible syntax much myself. I could theoretically write a JS version of this.
* Maybe the same thing, JS -> TikZ, but also run TikZ in WebAssembly so that the whole thing lives in the browser.
* Writing TikZ but ... having ChatGPT do it so I don't have to learn the antiquated syntax.
Non-TikZ options:
* InkScape
* JSXGraph, but it isn't very pretty
* ???
Thanks for your help!
Somewhat related but focused on animated diagrams, Manim: https://github.com/3b1b/manim From 3Blue1Brown, used in their math videos.
Also maybe Mathbox. https://github.com/unconed/mathbox From Steve Wittens / Acko.net. ( See also https://acko.net/blog/mathbox2/ )
Manim-sideview VSCode extension w/ snippets and live preview: https://github.com/Rickaym/Manim-Sideview
Manim example gallery: https://docs.manim.community/en/stable/examples.html
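A minimal manim (Community Edition) scene, just to show the workflow those links document; render with `manim -pql scene.py LabeledDiagram`:

```python
# scene.py: draw a labeled circle; the label is rendered with LaTeX.
from manim import UP, Circle, Create, MathTex, Scene, Write

class LabeledDiagram(Scene):
    def construct(self):
        circle = Circle()
        label = MathTex(r"x^2 + y^2 = r^2").next_to(circle, UP)
        self.play(Create(circle), Write(label))
        self.wait()
```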
From https://news.ycombinator.com/item?id=38019102 re: Animated AI, ManimML:
> Manim, Blender, ipyblender, PhysX, o3de, [FEM, CFD, [thermal, fluidic,] engineering]: https://github.com/ManimCommunity/manim/issues/3362
It actually looks like pygame-web (pygbag) supports panda3d and harfang in WASM, too; so manim with pygame for the web.
What we need is a modernized toolchain for Asymptote[0] that can run in the browser in realtime like MathJax; it has much nicer syntax than TikZ.
ipython-asymptote [1][2] probably supports Jupyter Retro (now built on the same components as JupyterLab) but not yet JupyterLite with the pyodide WASM kernel:
emscripten-forge builds things with emscripten to WASM packages. [4]
JupyterLite supports micropip (`import micropip; await micropip.install(["pandas",])`).
Does micromamba work in JupyterLite notebooks?
"DOC: How to work with emscripten-forge in JupyterLite" https://github.com/emscripten-forge/recipes/issues/699
[1] https://github.com/jrjohansson/ipython-asymptote/tree/master
[2] examples: https://notebook.community/jrjohansson/ipython-asymptote/exa...
LLMs cannot find reasoning errors, but can correct them
If this is the case, then just run it X times till error rate drops near 0. AGI solved.
This is called (Algorithmic) Convergence; does the model stably converge upon one answer which it believes is most correct? After how much resources and time?
Convergence (evolutionary computing) https://en.wikipedia.org/wiki/Convergence_(evolutionary_comp...
Convergence (disambiguation) > Science, technology, and mathematics https://en.wikipedia.org/wiki/Convergence#Science,_technolog...
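A hedged sketch of "run it X times" as majority-vote self-consistency; `ask_llm` is a hypothetical stand-in for any chat-completion call:

```python
# Majority-vote self-consistency over repeated samples; the answer has
# "converged" when one response dominates.
from collections import Counter

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical LLM call")

def self_consistent_answer(prompt: str, samples: int = 10):
    answers = [ask_llm(prompt) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / samples  # answer plus an empirical agreement rate
```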
OpenAI staff threaten to quit unless board resigns
If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.
Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.
So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.
[1] https://stratechery.com/2023/openais-misalignment-and-micros...
This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.
https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...
They could make ChatGPT++
“Microsoft Chat 365”
Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.
At least in this forum, can we please stop calling something that is not even close to AGI, AGI. It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are called machine learning.
And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?
Because LLMs just mimic human communication based on massive amounts of human generated data and have 0 actual intelligence at all.
It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.
One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional
Does it do logical reasoning or inference before presenting text to the user?
That's a lot of waste heat.
(Edit) With next-word prediction, that's all it is,
"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285
"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486
[deleted]
The human brain builds structures in 11 dimensions, discover scientists
Which NN architectures could be sufficient to simulate the entire human brain with spreading activation in 11 dimensions?
- citing the same paper: https://news.ycombinator.com/item?id=18218504
NetworkX does clique identification [1] in memory, and it looks like CuGraph does not yet have a parallel implementation [2]
[1] https://networkx.org/documentation/stable/reference/algorith...
[2] CuGraph docs > List of Supported and Planned Algorithms: https://docs.rapids.ai/api/cugraph/stable/graph_support/algo...
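For example, with NetworkX [1] (Bron-Kerbosch; worst-case exponential, which is why a parallel GPU implementation would matter):

```python
# Enumerate maximal cliques of a random graph, in memory.
import networkx as nx

G = nx.erdos_renyi_graph(n=100, p=0.1, seed=42)
cliques = list(nx.find_cliques(G))          # maximal cliques
print("largest clique found:", max(len(c) for c in cliques))
```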
Cryptographers solve decades-old privacy problem
It's an exciting time to be working in homomorphic encryption!
Homomorphic encryption and zero knowledge proofs are the most exciting technologies for me for the past bunch of years (assuming they work (I'm not qualified enough to know)).
Having third parties compute on encrypted, private data, and return results without being able to know the inputs or outputs is pretty amazing.
Can you give an example of a useful computation someone would do against encrypted data?
Trying to understand where this would come in handy.
Grading students' notebooks on their own computers without giving the answers away.
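For instance, a hedged sketch with TenSEAL (CKKS scheme): the grader computes on encrypted vectors and never sees the student's plaintext answers. The rubric weights and answer values are placeholders:

```python
# Compute a weighted score over an encrypted vector with TenSEAL (CKKS).
import tenseal as ts

context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2**40
context.generate_galois_keys()

student_answers = ts.ckks_vector(context, [3.0, 1.5, 7.0])  # encrypted
weights = [1.0, 2.0, 0.5]                                   # public rubric
encrypted_score = student_answers.dot(weights)              # computed blind
print(encrypted_score.decrypt())  # only the secret-key holder can read this
```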
https://news.ycombinator.com/item?id=37981190 :
> How can they be sure what's using their CPU?
Firefox, Chrome: <Shift>+<Escape> to open about:processes
Chromebook: <Search>+<Escape> to open Task Manager
An automatic indexing system for Postgres
> A fundamental decision we've made for the pganalyze Indexing Engine is that we break down queries into smaller parts we call "scans". Scans are always on a single table, and you may be familiar with this concept from reading an EXPLAIN plan. For example, in an EXPLAIN plan you could see a Sequential Scan or Index Scan, both representing a different scan method for the same scan on a given table.
Sequential scan == Full table scan: https://en.wikipedia.org/wiki/Full_table_scan
Yes, and a neat thing about indexes: sometimes it’s faster to do a sequential scan than load an index into memory.
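A small sketch of reading the scan nodes out of an EXPLAIN plan with psycopg2 (the DSN, table, and query are placeholders):

```python
# Print plan nodes such as "Seq Scan on users ..." vs
# "Index Scan using users_email_idx ..." for a query.
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("EXPLAIN SELECT * FROM users WHERE email = %s",
                ("a@example.com",))
    for (line,) in cur.fetchall():
        print(line)
```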
[deleted]
Show HN: Open-source tool for creating courses like Duolingo
I'm launching UneeBee, an open-source tool for creating interactive courses like Duolingo:
GitHub repo: https://github.com/zoonk/uneebee Demo: https://app.uneebee.com/
It's pretty early-stage, so there's a lot of things to improve. Everything on this project is going to be public, so you can check the roadmap on GitHub too: https://github.com/orgs/zoonk/projects/11
I'm creating this project because I love Duolingo and I wanted the same kind of experience to learn other things as well.
But I think this could be useful to other people too. I'll soon launch three products using UneeBee:
- Wikaro: Focused on enterprise. It allows companies to have their own white-label Duolingo. I think this is going to be great for onboarding and internal training.
- Educasso: Focused on schools. It will allow teachers to easily create interactive lessons, compliant with the local school curriculum. I want to make it in a way that saves teachers' time, so they focus more on their students rather than lesson planning.
- Wisek: Marketplace for interactive courses where creators will be able to earn money creating those courses.
I'm not sure this is going to work out but, worst case scenario, I'll have products that I can use myself because I'm a terrible learner using traditional ways. Interactive learning is super useful to me, so I hope it will be to other people too.
If you have some spare time, please give me your brutal feedback. I really want to improve this product, so no need to be nice - just let me know your thoughts. :)
PS. I'm also launching it on Product Hunt: https://www.producthunt.com/posts/uneebee
Notes for LitNerd (YC S21) re: IPA, "Duolingo's language notes all on one page", Sozo's vowel and consonant videos, Captionpop synced YouTube videos with subtitles in multiple languages: https://news.ycombinator.com/item?id=28309645
Spaced repetition and active recall testing, e.g. with Mnemosyne or Anki, probably boost language retention just as they increase flashcard recall: https://en.wikipedia.org/wiki/Anki_(software)
ENH: Generate Anki decks with {IPA symbols, Greek letters w/ LaTeX for math and science, Nonregional (Midland American) English, }
Google translate has IPA for some languages.
"The English Pronunciation / International Phonetic Alphabet Anki Deck" https://www.towerofbabelfish.com/ipa-anki-deck/
"IPA Spanish & English Vowels & Consonants" https://ankiweb.net/shared/info/3170059448
I don't know Elixir and so the hypothetical contribution barrier for nontrivial commits includes learning Erlang / Elixir.
The LearnXinYminutes tuts are succinct and on GitHub, for PRs to fix typos, reorder the language-learning sequence, and/or add content with comments.
LearnXinYMinutes > Elixir: https://learnxinyminutes.com/docs/elixir/
FWIW, elixir is now my favorite language. Learning elixir is one of the things I thank past self for doing!
How the gas turbine conquered the electric power industry
Nit pick:
The article explains why many years have passed from the development of efficient steam turbines until the development of efficient gas turbines, due to the differences between the Rankine Cycle and the Brayton cycle.
Even though Americans like to call the Joule cycle the Brayton cycle, the American name has no justification.
While George B. Brayton patented his engine in 1872, James Prescott Joule had already published a scientific paper ("On the Air-Engine") in 1851, 21 years earlier, describing what is now named the Joule cycle, a.k.a. the Brayton cycle.
While the Brayton patent contained very little information, the paper published by Joule was very important: it described in great detail how to design an engine using this thermodynamic cycle and why this is useful.
Moreover, in 1859, 13 years before Brayton, William John Macquorn Rankine published a very influential manual ("A Manual of the Steam Engine and other Prime Movers"), which classified all the thermodynamic cycles used in the engines known at that time, and he already referred to this cycle as Joule's cycle.
Therefore there is no doubt about the priority of the term "Joule cycle" over "Brayton cycle".
An interesting fact that is usually not mentioned in most manuals is that at equal maximum temperature and maximum pressure (which are typically limited by the materials used for the engine), the Joule cycle is more efficient than either the Atkinson cycle or the Otto cycle, so the fact that it is the easiest thermodynamic cycle to approximate in a gas turbine is favorable for its efficiency.
https://news.ycombinator.com/item?id=33431427 :
> FWIU, heat engines are useful with all thermal gradients: pipes, engines, probably solar panels and attics; "MIT’s new heat engine beats a steam turbine in efficiency" (2022) https://www.freethink.com/environment/heat-engine
"Thermophotovoltaic efficiency of 40%" (2022) https://www.nature.com/articles/s41586-022-04473-y https://scholar.google.com/scholar?cites=1419736444024563175...
"Capturing Light From Heat at 40% Efficiency, NREL Makes Big Strides in Thermophotovoltaics" (2022) https://www.nrel.gov/news/program/2022/capturing-light-from-.... :
> The 41%-efficient TPV device is a tandem cell—a photovoltaic device built out of two light-absorbing layers stacked on top of each other and each optimized to absorb slightly different wavelengths of light. The team achieved this record efficiency through the usage of high-performance cells optimized to absorb higher-energy infrared light when compared to past TPV designs. This design builds on previous work from the NREL team.
> Another crucial design feature leading to the high efficiency is a highly reflective gold mirror at the back of the cell. Much of the emitted infrared light has a longer (less energetic) wavelength than what the cell's active layers can absorb. This back surface reflector bounces 93% of that unabsorbed light back to the emitter, where it is reabsorbed and reemitted, improving the overall efficiency of the system. Further improvements to the reflectance of the back reflector could drive future TPV efficiencies close to or above 50%.
Thermoelectric effect: https://en.wikipedia.org/wiki/Thermoelectric_effect
Thermophotovoltaic energy conversion *: https://en.wikipedia.org/wiki/Thermophotovoltaic_energy_conv...
Thermophotonics: https://en.wikipedia.org/wiki/Thermophotonics
Gas turbine: https://en.wikipedia.org/wiki/Gas_turbine :
> gross thermal efficiency exceeds 60%. [100] (2011)
GE-7HA https://www.ge.com/news/press-releases/ha-technology-now-ava... (2017) :
> that its largest and most efficient gas turbine, the HA, is now available at more than 64 percent efficiency in combined cycle power plants, higher than any other competing technology today.
How do TPV operating and lifecycle costs differ from gas turbine's costs?
TODO; though, also: after a gas turbine or a solid-state TPV cell array, you still have to store the electricity, which is lossy and inefficient:
An electric motor's efficiency is not necessarily the same as its generator efficiency in reverse.
Gravitational Potential Energy
CAES Compressed Air Energy Storage:
Solar thermal energy > https://en.wikipedia.org/wiki/Solar_thermal_energy :
> Electrical conversion efficiency: Of all of these technologies the solar dish/Stirling engine has the highest energy efficiency. A single solar dish-Stirling engine installed at Sandia National Laboratories National Solar Thermal Test Facility (NSTTF) produces as much as 25 kW of electricity, with a conversion efficiency of 31.25%. [66]
Szilard-Chalmers MOST process: https://news.ycombinator.com/item?id=34027647 ...18 years at what conversion efficiency?
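Toy arithmetic for the storage-loss point above (illustrative numbers, not measurements):

```python
# Generation efficiency and storage round-trip losses compound multiplicatively.
gen_eff = 0.64             # e.g. a combined-cycle gas turbine (see above)
storage_roundtrip = 0.75   # assumed round-trip efficiency of the store
print(f"delivered fraction: {gen_eff * storage_roundtrip:.2%}")  # 48.00%
```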
A PCIe Coral TPU Finally Works on Raspberry Pi 5
Would an HBM3E HAT make TPUs more useful with a Raspberry Pi 5, or not yet?
Jetson Nano (~$149)
Orin Nano (~$499, 32 tensor cores, 40 TOPS)
AGX Orin (200-275 TOPS)
NVIDIA Jetson > Origins: https://en.wikipedia.org/wiki/Nvidia_Jetson#Versions
TOPS for NVIDIA [Orin] Nano [AGX] https://connecttech.com/jetson/jetson-module-comparison/
Coral Mini-PCIe ($25; ? tensor cores, 4 TOPS (int8); 2 TOPS per watt)
TPUv5 (393 TOPS)
Tensor Processing Unit (TPU) https://en.wikipedia.org/wiki/Tensor_Processing_Unit
AI Accelerator > Nomenclature: https://en.wikipedia.org/wiki/AI_accelerator
NVIDIA DLSS > Architecture: https://en.wikipedia.org/wiki/Deep_learning_super_sampling#A... :
> DLSS is only available on GeForce RTX 20, GeForce RTX 30, GeForce RTX 40, and Quadro RTX series of video cards, using dedicated AI accelerators called Tensor Cores. [23][28] Tensor Cores are available since the Nvidia Volta GPU microarchitecture, which was first used on the Tesla V100 line of products.[29] They are used for doing fused multiply-add (FMA) operations that are used extensively in neural network calculations for applying a large series of multiplications on weights, followed by the addition of a bias. Tensor cores can operate on FP16, INT8, INT4, and INT1 data types.
Vision processing unit: https://en.wikipedia.org/wiki/Vision_processing_unit
Versatile Processing Unit (VPU)
Wikidata, with 12B facts, can ground LLMs to improve their factuality
Can it though?
LLMs are currently trained on actual language patterns, and pick up facts that are repeated consistently, not one-off things -- and within all sorts of different contexts.
Adding a bunch of unnatural "From Wikidata, <noun> <verb> <noun>" sentences to the training data, severed from any kind of context, seems like it would run the risk of:
- Not increasing factual accuracy because there isn't enough repetition of them
- Not increasing factual accuracy because these facts aren't being repeated consistently across other contexts, so they result in a walled-off part of the model that doesn't affect normal writing
- And if they are massively repeated, all sorts of problems with overtraining and learning exact sentences rather than the conceptual content
- Either way, introducing linguistic confusion to the LLM, thinking that making long lists of "From Wikidata, ..." is a normal way of talking
If this is a technique that actually works, I'll believe it when I see it.
(Not to mention the fact that I don't think most of the stuff people are asking LLMs for is represented in Wikidata. Wikidata-type facts are already pretty decently handled by regular Google.)
Well that's not actually how it works - they are just getting a model (WikiSP & EntityLinker) to write a query that responds with the fact from Wikidata. Did you read the post or just the headline?
Besides, let's not forget that humans are also trained on language data, and although humans can also be wrong, if a human memorised all of Wikidata (by reading sentences/facts in 'training data') it would be pretty good in a pub-quiz.
Also, we obviously can't see anything inside how OpenAI train GPT, but I wouldn't be surprised if sources with a higher authority (e.g. wikidata) can be given a higher weight in the training data, and also if sources such as wikidata could be used with reinforcement learning to ensure that answers within the dataset are 'correctly' answered without hallucination.
Ah, I did misunderstand how it worked, thanks -- I was looking at the flow chart and just focusing on the part that said "From Wikidata, the filming location of 'A Bronx Tale' includes New Jersey and New York" that had an arrow feeding it into GPT-3...
I'm not really sure how useful something this simple is, then. If it's not actually improving the factual accuracy in the training of the model itself, it's really just a hack that makes the whole system even harder to reason about.
The objectively true data part?
Also there's Retrieval Augmented Generation (RAG) https://www.promptingguide.ai/techniques/rag :
> For more complex and knowledge-intensive tasks, it's possible to build a language model-based system that accesses external knowledge sources to complete tasks. This enables more factual consistency, improves reliability of the generated responses, and helps to mitigate the problem of "hallucination".
> Meta AI researchers introduced a method called Retrieval Augmented Generation (RAG) to address such knowledge-intensive tasks. RAG combines an information retrieval component with a text generator model. RAG can be fine-tuned and its internal knowledge can be modified in an efficient manner and without needing retraining of the entire model.
> RAG takes an input and retrieves a set of relevant/supporting documents given a source (e.g., Wikipedia). The documents are concatenated as context with the original input prompt and fed to the text generator which produces the final output. This makes RAG adaptive for situations where facts could evolve over time. This is very useful as LLMs's parametric knowledge is static.
> RAG allows language models to bypass retraining, enabling access to the latest information for generating reliable outputs via retrieval-based generation.
> Lewis et al., (2021) proposed a general-purpose fine-tuning recipe for RAG. A pre-trained seq2seq model is used as the parametric memory and a dense vector index of Wikipedia is used as non-parametric memory (accessed using a neural pre-trained retriever). [...]
> RAG performs strong on several benchmarks such as Natural Questions, WebQuestions, and CuratedTrec. RAG generates responses that are more factual, specific, and diverse when tested on MS-MARCO and Jeopardy questions. RAG also improves results on FEVER fact verification.
> This shows the potential of RAG as a viable option for enhancing outputs of language models in knowledge-intensive tasks.
So, with various methods, I think having ground facts in the process somehow should improve accuracy.
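A hedged sketch of the "write a query against Wikidata" flow discussed above (assuming wdt:P915 is the filming-location property, as in the "A Bronx Tale" example):

```python
# Query Wikidata's public SPARQL endpoint for a ground fact.
import requests

query = """
SELECT ?locLabel WHERE {
  ?film rdfs:label "A Bronx Tale"@en ;
        wdt:P915 ?loc .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"},
                 headers={"User-Agent": "grounding-demo/0.1"})
for row in r.json()["results"]["bindings"]:
    print(row["locLabel"]["value"])
```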
[deleted]
Is Delaware the cheapest place to incorporate?
I am living in Taiwan and want to create a startup. The business will be mostly open source and likely to have low to no revenue.
I see that US states like Colorado have no franchise tax. But I also saw posts here that Delaware is usually ultimately cheaper.
What is the recommendation for a company to manage an open source project? Sure it might be worth money, but likely not, so I would like to keep money tight.
thanks!
There are many Open Source Software foundations that specialize in stewarding open source intellectual property and also open governance. Linux Foundation, Apache Software Foundation, [...]
Very few software licenses accept liability, including open source software licenses. Is that conscionable? Service Level Agreements (99% uptime and ZenDesk email-in customer support or better etc) cost money.
E.g. LegalZoom (no affiliation) has affiliate attorneys in many states, including Delaware.
It may or may not be common for open source software projects to register their trademark and/or DBA (Doing Business As) in each state of operation and each state where labor law applies (especially if there are remote workers).
GitHub (now Microsoft owned) supports FUNDING.yml files to display sponsor buttons for projects: https://docs.github.com/en/repositories/managing-your-reposi...
"Sponsors is expanding" (2023-10) https://github.blog/2023-10-03-sponsors-is-expanding/ :
> GitHub Sponsors now supports 103 regions!
E.g. WebMonetization.org supports the W3C Interledger spec (ILP Protocol), which can connect traditional and digital asset ledgers. GitHub supports a number of ~payments/donations providers but not yet any w/ Interledger FWICS?
> Did you know? We recently launched the ability for self-serve enterprise customers to allow member organizations to easily create sponsorships. Today, more than nine in 10 companies use open source software in at least some capacity. Knowing this, we enhanced our invoice process for organizations, making it easier for organizations to sign up and request invoicing as a payment method for sponsorships.
> Additionally, we are making it easier for self-serve enterprise customers to grant their member organization permission to create sponsorships
From the GH Sponsors FAQ re a Matching Fund https://github.blog/2019-06-12-faq-with-the-github-sponsors-... :
> Can’t people just steal money from the matching fund?: We have a rigorous vetting process for the sponsored developers who receive the match. If you happened to see the application form at github.com/sponsors, you’ll notice we ask a lot of questions that support this process. We’re also introducing more measures—including an extensive identity verification and antifraud program in partnership with Stripe—as we grow the program this summer.
YouTube may face criminal complaints in EU for using ad-block detection scripts
Why do you pretend that you are entitled to free video storage and bandwidth from YouTube? You haven't been slighted. That service costs money to provide.
No, you don't have a right to free service either. Do you pay your other bills while you demand free rendered services from these companies?
Having a disability or similar does not entitle you to an unsubsidized Times Square with no ads.
(NASA is running their own streaming network and competing; it can be done. The EU should try to run competing free video streaming businesses before shoving preferred American companies around with anti-competitive claims. The EU hasn't run a video hosting business that's been prevented from competing by the success, existence, and approved mergers and acquisitions of American media companies; and so EU video hosting businesses haven't been, and can't have been, anti-competitively disadvantaged.)
I also run ad blockers for various justifiable reasons; but I don't tell myself that I have a right to free shtuff.
How the heck can you require only Netflix to host 30% local EU content and also demand free video streaming service with no ads?
> Why do you pretend that you are entitled to free video storage and bandwidth from YouTube?
Because they offered it for free.
YouTube can close the doors any time. If they want my money, they can make a service offering that meets my needs. They could charge content providers for bandwidth and storage and meter it with assisted ad-support networks. They could charge a price I'm willing to pay.
But they don't and I will not accept any argument that consuming resources they put into the public sphere for free use means I am under any moral obligation to either give them money or facilitate them making money off of my traffic.
The only time ads worked was when Google made them an unobtrusive part of search. They dominate literally every piece of software I use now. I'm sorry but I say burn it all to the ground. I will either pay for or build its replacement.
You don't want ad-blockers? Shut it down. I was doing the internet before there was a need for them.
> Because they offered it for free.
The Information Service Provider offered it for "free with ads".
Do you otherwise support paying creators for their work, if not through YouTube's system for compensating creators?
My Service Provider License Agreement states that I am able to circumvent any ad technology when using my own devices. It is free. The "with ads" part is someone else's opinion.
If you don't want ad-blockers, shut it down.
You're not going to shut them down, it sounds like they shut you down. Just pony up the $100/year. It's the cheapest television has ever been. This is hacker news, not hobo news. If you've been using the Internet this long, then you should remember the outrageous amounts of money people were paying for cable television back in the 90's. And that still had ads.
Most content creators are either doing it for free, or getting a fraction of a cent on the dollar. As far as I'm concerned, that makes it communication infrastructure, not service provision. YouTube is providing the infrastructure; the creators are providing the service.
As I think that infrastructure should be publicly owned, I'm happy to do my bit for nationalization, and use adblock.
YouTube may very well charge for whatever they want. But pretending to offer the service for free while inspecting your information (in the form of browser capabilities and other PII) is like entering a store and the owner going through your wallet just because you want to browse around.
I really want to understand the psychology of the people who show up and comment like the OP did.
Does he work for Google? For Youtube? More than a few people here on HN must, right? But there are so many like him, it can't be the explanation all the time. Does he worship big tech companies to the point that if he shills for them, he believes that career success will rain down on him, like some sort of occupational cargo culting thing? Is he some amateur Deviant Art person, who shills for obscene copyright maximalism for similar reasons? Is it just that he does the one sort of 4chan-level shitposting that people can get away with on HackerNews, for the same reason the 4chan people do their thing there (whatever that is)?
It's weird. This is the one topic where you can see that sort of thing here without it being wished away to the cornfield. And the people doing it seem confident they can get away with it.
> the psychology of
Here's this: "I hate ads. My time is valuable. I want free things. What I do is justifiable. Especially when you consider moral relativism and their practices."
And this: "I hate war, but the corporate media of the 2000s sold it to me and then didn't pay the bill; so we need citizen media."
I don't think we're entitled to free video streaming; maybe because I remember how much it costs to host an .mp4 without HLS on a shared hosting account (without nginx-rtmp-module; C and ffmpeg, not yet Rust), how long it takes to encode video without custom hardware video-encoding accelerator cards and low-volume ASIC/FPGA TLS load-balancer accelerators now that video goes over HTTPS, and because I don't want to pull media from your MediaGoblin tube site.
I don't support artists suing fans listening from the streets.
Artists are entitled to proceeds from their work if that's how they want to run the show.
I do support paying artists with the audio fingerprinting that YouTube pioneered.
(As an artist and a visual artist - it doesn't matter what kind - I don't want to ruin YouTube with payout demands; but if musical artists are due their cut for their plays, then visual artists are too. No musical artists have yet stood up for the plight of visual artists. Nobody has yet determined how to pay everyone on a production with a smart contract that gives them their fair cut for their contribution to the collaborative art project.)
It costs money to encode, host, and moderate video, live video, comments, and live chats.
It costs money to stream video.
Good content costs money to create, in an ad hominem-ridden, influencer-affected landscape devoid of critical thinking and media literacy.
Artists don't get paid when you stream for free without ads or premium subscription.
How can we pay artists and content producers and privately bootstrapped infrastructure if the marginal cost of a stream is not offset by the marginal returns of a stream?
Content creators have real costs.
--
Copyleft is my decision as an artist. Open Source is my decision as a developer who can't donate their services to charity.
--
Anti-competition (antitrust) context:
What's not fair, anticompetitively? Tying, bundling, exclusive agreements, price fixing, colluding cartels (for non-essential commodities), bribery, kickbacks, becoming a lobbyist without waiting a fair amount of time first.
What is fair? Selling to the highest bidder. Approved mergers and acquisitions. Strategies against hostile corporate takeover. Taking the bank's money and your creditworthiness and bootstrapping. Penny-pinching to scale and gain market share. Appeasing shareholders/owners. Charging people when they use hours of free services per week.
I'm interested that music is your hook here and ensuring artists are paid for their work (which I also agree with), but you stated before that you also use ad blockers, so how do you support those non video/audio content producers whose written or other media you consume with ad blockers? (I assume you agree that content creation is also a worthwhile endeavour, otherwise you wouldn't be bothering to read the online content that requires you to use ad blocking.)
What is different to you between these scenarios?
I can otherwise support artists and their lifestyles (or not) by subscribing; buying their music, tickets to live shows, jam cruise, and merch; and giving free word of mouth and mentions.
That they've agreed to sell their content on a network with ads (which in particular enables low-income folks to be fans) does not entitle me to free stuff from them; though I may also have a justified medical reason for tuning out ads entirely unless they're funny.
That's not the issue here. The issue is that YT is (arguably) using illegal measures to enforce their own rights. Just because someone infringes on your rights doesn't make it legal for you to infringe back on theirs.
You do not have the right to stream video for free from YouTube.com.
That they offer an ad-supported service does not entitle us to an ad-free service.
Limiting playback without ads is not beyond the rights of the information service provider.
I'm not contesting the legitimacy of YT fighting ad blockers. I'm merely pointing out that the claim here is that they are doing so using illegal means (by using what can be legally seen as spyware without the user's consent, which is illegal in the EU, according to the plaintiff).
In other words, the issue is not that YT fights back, it's how they do it.
Maybe it's a lack of technical understanding.
Are you familiar with kegerator systems with usage quotas? ("Free as in beer")
How can a computerized kegerator system (or a bartender) limit a person to a specific number of drafts from the tap? Is that spyware, or what you agree to when you draw from their kegs?
This is a 1 dimensional view of things.
Maybe I can simplify it for you with this question:
Does Google have the ability to legally compel you to only use Chrome?
I'm guessing you would say no - antitrust and whatnot. So the next follow-up is: does Google have the ability to tell blind people they cannot use screen readers? Or that people on Linux can't browse the site in Lynx?
Again, I am guessing the answer is no - there are anti-competitive, antitrust, and 230(c) reasons why. Legally, ad blocking is fine to do. Here's a great article going over it: https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?art...
HOWEVER - I disagree with the premise that YouTube can't try to stop adblockers. They just need to do so in a way that doesn't specifically target a user. Twitch did a system where they would not send the video stream to you until the advertisement was done playing (which was embedded in the feed itself) - so if you blocked the ad somehow, you would just look at a black screen for 15-30 seconds. This, in my opinion, would be completely compliant.
> would not send the video stream to you until the advertisement was done playing
Is this really what consumers prefer?
Logged-in users necessarily carry state in some way such that they are identifiable as a logged-in user. "Session cookies" (and 'super cookies' etc) are standard practice for tracking which users are logged in.
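Mechanically, that state is usually just a signed token the browser replays on every request; a minimal sketch in Python (the secret and names here are made up, and real sites use hardened session frameworks):

    import base64, hashlib, hmac

    SECRET = b"server-side secret"  # hypothetical key; never sent to clients

    def make_session_cookie(user_id: str) -> str:
        # Sign the user id so the server can later recognize a logged-in user.
        sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).digest()
        return user_id + "." + base64.urlsafe_b64encode(sig).decode()

    def verify_session_cookie(cookie: str) -> str | None:
        user_id, _, sig = cookie.rpartition(".")
        expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).digest()
        expected_b64 = base64.urlsafe_b64encode(expected).decode()
        return user_id if hmac.compare_digest(expected_b64, sig) else None

    cookie = make_session_cookie("alice")
    print(verify_session_cookie(cookie))        # alice
    print(verify_session_cookie(cookie + "x"))  # None (tampered)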
YouTube does not and has not required login to view creators' videos and shorts.
And now don't they - just YouTube, hopefully - have to require login for their ToS to be a recognized agreement that authorizes determining whether the user is logged in and not taking hours of free service?
Nobody claims to be entitled to free video, just a reasonable amount of advertising; which is not what I (for example) get when I go on YouTube.
Google already has a definitive solution: close down free access and put it behind registration plus a (reasonable) payment. Why are they still serving free content? Why don't they take the money from viewers directly?
I think the reason is that they are inflating the number of viewers for everyone (content creators, stakeholders, etc.). You can fake viewers; you can't fake revenue. So Google wants us to see more advertising so that they can claim the number of ad views increased, to earn more.
Why are you/they trying to force YouTube to require login to view video?!
Isn't this about privacy!? How can free video plays have privacy if login is required to prevent freeloading hours of free service that others pay for?
That would be a significant pivot away from free video that democratizes video, and from video URLs that people share to walled garden video URLs.
I wouldn't mind watching some ads from time to time. However, these guys have gotten too greedy with their advertisements and the "buy premium" shit popping up every now and then. Heck, my experience without a blocker was two freaking 15-second inserts every 5 minutes of video.
I think a lot of that is the (musical) artists/businesspeople demanding a higher - likely reasonable - cut, so we all get more ads.
Just think how expensive YouTube would be if artists started demanding royalties for works that match their video fingerprints (in addition to the payouts according to audio fingerprints that YouTube pioneered).
Encoding, moderation, storage, and bandwidth cost money. I'm sure the YouTube financials and margin are posted.
A video streaming service can afford to operate without ads only if: _____.
You understand this if you've ever tried to host (multiply-reencoded) video on a shared hosting service with a bandwidth quota for $10-$20 a month.
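To put rough numbers on that (both figures below are assumptions for illustration, not YouTube's actual costs):

    # Back-of-envelope egress cost for one viewer-hour of 1080p video.
    bitrate_mbps = 5                              # assumed typical 1080p bitrate
    gb_per_hour = bitrate_mbps / 8 * 3600 / 1000  # ~2.25 GB per viewer-hour
    egress_usd_per_gb = 0.09                      # assumed cloud list price
    print(f"{gb_per_hour:.2f} GB/h -> ${gb_per_hour * egress_usd_per_gb:.2f}/viewer-hour")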
https://WebMonetization.org/ is one proposed solution to advertising-supported media.
AI chemist finds molecule to make oxygen on Mars after sifting through millions
From the article:
> The AI chemist used a robot arm to collect samples from the Martian meteorites, then it employed a laser to scan the ore. From there, it calculated more than 3.7 million molecules it could make from six different metallic elements in the rocks — iron, nickel, manganese, magnesium, aluminum and calcium.
> Within six weeks, without any human intervention, the AI chemist selected, synthesized and tested 243 of those different molecules.
I expected something less automated. It took a few readings to see that the robotic research labs from the computer games I used to play are now here.
"Automated synthesis of oxygen-producing catalysts from Martian meteorites by a robotic AI chemist" (2023) https://www.nature.com/articles/s44160-023-00424-1 :
> Living on Mars requires the ability to synthesize chemicals that are essential for survival, such as oxygen, from local Martian resources. However, this is a challenging task. Here we demonstrate a robotic artificial-intelligence chemist for automated synthesis and intelligent optimization of catalysts for the oxygen evolution reaction from Martian meteorites. The entire process, including Martian ore pretreatment, catalyst synthesis, characterization, testing and, most importantly, the search for the optimal catalyst formula, is performed without human intervention. Using a machine-learning model derived from both first-principles data and experimental measurements, this method automatically and rapidly identifies the optimal catalyst formula from more than three million possible compositions. The synthesized catalyst operates at a current density of 10 mA cm−2 for over 550,000 s of operation with an overpotential of 445.1 mV, demonstrating the feasibility of the artificial-intelligence chemist in the automated synthesis of chemicals and materials for Mars exploration.
Terraforming: https://en.wikipedia.org/wiki/Terraforming
Ethics of terraforming: https://en.wikipedia.org/wiki/Ethics_of_terraforming
Terraforming of Mars: https://en.wikipedia.org/wiki/Terraforming_of_Mars :
> Mars doesn't have an intrinsic global magnetic field, but the solar wind directly interacts with the atmosphere of Mars, leading to the formation of a magnetosphere from magnetic field tubes.[14] This poses challenges for mitigating solar radiation and retaining an atmosphere.
> The lack of a magnetic field, its relatively small mass, and its atmospheric photochemistry, all would have contributed to the evaporation and loss of its surface liquid water over time.[15] Solar wind–induced ejection of Martian atmospheric atoms has been detected by Mars-orbiting probes, indicating that the solar wind has stripped the Martian atmosphere over time. For comparison, while Venus has a dense atmosphere, it has only traces of water vapor (20 ppm) as it lacks a large, dipole-induced, magnetic field.[14][16][15] Earth's ozone layer provides additional protection. Ultraviolet light is blocked before it can dissociate water into hydrogen and oxygen. [17]
Oxygen evolution: https://en.wikipedia.org/wiki/Oxygen_evolution
Fast Symbolic Computation for Robotics
https://github.com/sympy/sympy/issues/9479 suggests that multivariate inequalities are still unsolved in SymPy, though it looks like https://github.com/sympy/sympy/pull/21687 was merged in August. This probably isn't implemented in C++ in SymForce yet?
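For context, here is roughly where the line falls (behavior depends on the installed SymPy version; a sketch):

    from sympy import symbols
    from sympy.solvers.inequalities import reduce_inequalities

    x, y = symbols("x y", real=True)

    # Univariate inequalities reduce fine:
    print(reduce_inequalities([x**2 - 4 < 0], [x]))  # (-2 < x) & (x < 2)

    # The multivariate case has historically raised NotImplementedError:
    try:
        print(reduce_inequalities([x + y > 1, x - y < 0], [x, y]))
    except NotImplementedError as err:
        print("multivariate:", err)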
Is my toddler a stochastic parrot?
Language acquisition > See also: https://en.wikipedia.org/wiki/Language_acquisition
Phonological development: https://en.wikipedia.org/wiki/Phonological_development
Imitation > Child development: https://en.wikipedia.org/wiki/Imitation#Child_development
https://news.ycombinator.com/item?id=33800104 :
> "The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-Step, Lasting Change for You and Your Child" https://www.google.com/search?kgmid=/g/11h7dr5mm6&hl=en-US&q...
> "Everyday Parenting: The ABCs of Child Rearing" (Kazdin, Yale,) https://www.coursera.org/learn/everyday-parenting
> Re: Effective praise and Validating parenting [and parroting]
US surgeons perform first whole eye transplant
Pretty incredible, though I am doubtful of the optic nerve regeneration because of the absolutely insane density of the nerve fiber. Seems like something that will be beyond the grasp of science for the foreseeable future, but the possibility of the unexpected is exciting.
> I am doubtful of the optic nerve regeneration because of the absolutely insane density of the nerve fiber. Seems like something that will be beyond the grasp of science for the foreseeable future
It's been done quite successfully in mice [0]. Last I checked, it was being tested on primates. The method relies on activating the Yamanaka factors used in stem cell research.
Your link is about gene therapy in the eyes of mice, and is specifically a method designed as an alternative to transplant:
> “This new approach, which successfully reverses multiple causes of vision loss in mice without the need for a retinal transplant, represents a new treatment modality in regenerative medicine.”
And that's just retinal transplant, much less whole-eye transplant.
The link provided is also about a method to produce optic nerve regeneration, regardless of whether there has been a transplant or not; unless you have a reason to believe that it would not work in the case of a transplant.
Retina or optic nerve: how do the regenerative methods differ?
Visual system > System overview: https://en.wikipedia.org/wiki/Visual_system :
> Mechanical: Together, the cornea and lens refract light into a small image and shine it on the retina. The retina transduces this image into electrical pulses using rods and cones. The optic nerve then carries these pulses through the optic canal. Upon reaching the optic chiasm the nerve fibers decussate (left becomes right). The fibers then branch and terminate in three places. [1][2][3][4][5][6][7]
> Neural: Most of the optic nerve fibers end in the lateral geniculate nucleus (LGN).
https://news.ycombinator.com/item?id=36912925 , ... :
- "Direct neuronal reprogramming by temporal identity factors" (2023) https://www.pnas.org/doi/10.1073/pnas.2122168120#abstract
- "Retinoid therapy restores eye-specific cortical responses in adult mice with retinal degeneration" (2022) https://www.cell.com/current-biology/fulltext/S0960-9822(22)...
- "Genetic and epigenetic regulators of retinal Müller glial cell reprogramming" (2023) https://www.sciencedirect.com/science/article/pii/S266737622...
- https://en.wikipedia.org/wiki/Tissue_nanotransfection#Techni... Ctrl-F "neurons"
Regeneration in humans > Induced regeneration: https://en.wikipedia.org/wiki/Regeneration_in_humans#Induced...
Thermal transistors handle heat with no moving parts
"Test Processor With New Thermal Transistors Cools Chip Without Moving Parts" https://www.tomshardware.com/news/test-processor-with-new-th... :
> Compared to normal cooling methods, the experimental transistors were 13 times better.
"Electrically gated molecular thermal switch" (2023) https://www.science.org/doi/10.1126/science.abo4297 :
> Abstract: Controlling heat flow is a key challenge for applications ranging from thermal management in electronics to energy systems, industrial processing, and thermal therapy. However, progress has generally been limited by slow response times and low tunability in thermal conductance. In this work, we demonstrate an electronically gated solid-state thermal switch using self-assembled molecular junctions to achieve excellent performance at room temperature. In this three-terminal device, heat flow is continuously and reversibly modulated by an electric field through carefully controlled chemical bonding and charge distributions within the molecular interface. The devices have ultrahigh switching speeds above 1 megahertz, have on/off ratios in thermal conductance greater than 1300%, and can be switched more than 1 million times. We anticipate that these advances will generate opportunities in molecular engineering for thermal management systems and thermal circuit design.
> Can switch at 1 MHz
> Can be switched "more than 1 million times"
Seems like longevity is a potential issue. Definitely could be useful for a few applications (especially temperature control), though I'm not really sure about a pure cooling application.
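The abstract's own figures make the concern concrete: at the quoted switching rate, the quoted endurance is about one second of continuous operation.

    # figures quoted in the abstract: >1 MHz switching, >1e6 total switches
    rate_hz, lifetime_switches = 1_000_000, 1_000_000
    print(f"lifetime at full switching speed: {lifetime_switches / rate_hz:.0f} s")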
Firmware Software Bill of Materials (SBoM) Proposal
Back in the day, we used to embed strings into the translation units that would report the original name and version of the file. One could use the 'strings' command to get detailed information about what files were used in a binary and which version! DVCS (git) broke that, so most people don't remember.
Knowing which version of a file made it into a binary still doesn't really help you, though. The compiler used (if any), the version of the compiler and linker, and even the settings/flags used affect the output and - in some cases - could convert an otherwise secure program into something exploitable.
A Software BoM sounds like a "first step" towards documenting a supply chain, but I'm not sure it's in the right direction.
This feels like this might actually be a use-case for a blockchain or a Merkle Tree.
Consider: a file exists in a git repository under a hash, which theoretically (excluding hash collisions) uniquely identifies the file. Embed the file hashes in the executable along with a repository URL and you essentially know which files were used to build it. Sign the executable to ensure it's not tampered with, then upload the hash of the executable to a blockchain.
If your executable is a compiler, then when someone else builds an executable then they can embed the hash of the compiler into the executable to link the binary back to the specific compiler build that made the binary. The compiler could even include into the binary the flags used to modify the compiler behavior.
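A sketch of the two primitives that idea combines - git's content hash and a Merkle root over the file hashes (illustrative only; real SBoM and transparency-log formats differ):

    import hashlib

    def git_blob_hash(data: bytes) -> str:
        # Hash file contents the way `git hash-object` does.
        header = f"blob {len(data)}\0".encode()
        return hashlib.sha1(header + data).hexdigest()

    def merkle_root(hex_leaves: list[str]) -> str:
        # Pairwise-hash leaf digests up to a single root.
        level = [bytes.fromhex(h) for h in hex_leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate an odd leaf out
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    files = {"main.c": b"int main(void) { return 0; }\n"}
    leaves = sorted(git_blob_hash(data) for data in files.values())
    print("leaves:", leaves)
    print("root:  ", merkle_root(leaves))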
>This feels like this might actually be a use-case for a blockchain or a Merkle Tree.
A few years ago, a similar idea for firmware binary security[0] was explored by Google as a possible application of their Trillian[1] distributed ledger, which is based on Merkle trees.
I don't know if they've advanced adoption of Trillian for firmware; however, the website lists Go packaging[2], Certificate Transparency[3], and SigStore[4] as current applications.
[0] https://github.com/google/trillian-examples/tree/master/bina...
[2] https://go.googlesource.com/proposal/+/master/design/25530-s...
Sigstore artifact signature verification may be part of a SLSA secure software supply chain workflow.
slsa-framework/slsa-github-generator > Generate [signed] provenance metadata : https://github.com/slsa-framework/slsa-github-generator#gene... :
> Supply chain Levels for Software Artifacts, or SLSA (salsa), is a security framework, a check-list of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure in your projects, businesses or enterprises.
> SLSA defines an incrementally-adoptable set of levels which are defined in terms of increasing compliance and assurance. SLSA levels are like a common language to talk about how secure software, supply chains and their component parts really are.
The impossible Quantum Drive that defies known laws of physics reached space
> “I don’t know of any other purely electric drives ever tested in space,” Mansell told The Debrief, including the controversial EMDrive, which, he noted, relies on a completely different technology but also claims to produce thrust without propellant. “If so, this will be the first time a purely electric, “non-conventional” drive will have ever been tested in space!”
Nvidia H200 Tensor Core GPU
The H200 GPU die is the same as the H100's, but it's using a full set of faster 24 GB memory stacks:
https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-acc...
This is an H100 141GB, not new silicon like the Nvidia page might lead one to believe.
It is remarkable how much GPU compute is limited by memory speed.
What would make [HBM3E] GPU memory faster?
High Bandwidth Memory > HBM3E: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#HBM3E
Compared to HBM3, you mean?
The memory makers bump up the speed the memory itself is capable of through manufacturing improvements. And I guess the H100 memory controller has some room to accept the faster memory.
More technically, I suppose.
Is the error rate due to quantum tunneling at so many nanometers still a fundamental limit to transistor density and thus also (G)DDR and HBM performance per unit area, volume, and charge?
https://news.ycombinator.com/item?id=38056088 ; a new QC and maybe in-RAM computing architecture like HBM-PM: maybe glass on quantum dots in synthetic DNA, and then still wave function storage and transmission; scale the quantum interconnect
Is melamine too slow for >= HBM RAM?
My understanding is that while quantum tunneling defines a fundamental limit to miniaturization of silicon transistors we are still not really near that limit. The more pressing limits are around figuring out how to get the EUV light to consistently draw denser and denser patterns correctly.
From https://news.ycombinator.com/item?id=35380902 :
> Optical tweezers: https://en.wikipedia.org/wiki/Optical_tweezers
> "'Impossible' photonic breakthrough: scientist manipulate light at subwavelength scale" https://thedebrief.org/impossible-photonic-breakthrough-scie... :
>> But now, the researchers from Southampton, together with scientists from the universities of Dortmund and Regensburg in Germany, have successfully demonstrated that a beam of light can not only be confined to a spot that is 50 times smaller than its own wavelength but also “in a first of its kind” the spot can be moved by minuscule amounts at the point where the light is confined
FWIU, quantum tunneling is regarded as error to be eliminated in digital computers; but it may be a sufficient quantum computing component: cause electron-electron wave function interaction and measure. But there is only zero-or-one readout in adjacent RAM transistors. Lol, "Rowhammer for qubits".
"HBM4 in Development, Organizers Eyeing Even Wider 2048-Bit Interface" (2023) https://news.ycombinator.com/item?id=37859497
Low current around roots boosts plant growth
Fungal mycelial networks can form an underground network capable of transmitting electricity from plant to plant.
I wonder if, by tuning into just the right frequency plants are already using for communication, we could send more "grow, please" signals.
There is a Swiss startup doing something like this. They have created a plant sensor that taps into the electrical signals of plants, and use AI to develop an understanding of plant communication. The use case seems to be early diagnosis of stress rather than manipulation of plants, but who knows - some day the same understanding could be used to 'control' plants: https://vivent.ch/
Is there nonlinearity due to the observer effect in this system?
From how many meters away can a human walking in a forest be detected with such an organic signal network?
FWIU mycorrhizal networks all broadcast on the same channel? Is it full duplex; are they transmitting and receiving simultaneously?
Show HN: I wrote a multicopter simulation library in Python
* [Documentation](https://multirotor.readthedocs.io/en/latest/)
* [Source code](https://github.com/hazrmard/multirotor)
* [Demo/Quickstart](https://multirotor.readthedocs.io/en/latest/Quickstart.html)
There are many simulation libraries out there - for example, AirSim on Unreal Engine, several implementations in Unity3D, and MATLAB toolboxes. I wanted a simple, hackable codebase with which to experiment.
So, I wrote this. Propellers, motors, batteries, and airframe are their own components and can be mixed and matched. The code lets you create any number of propellers, and an optimization function learns a PID controller for that vehicle. Additionally, there are convenience functions to visualize the vehicle in 3D and plot sensor measurements.
Please let me know what you think :)
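For readers new to the control layer mentioned above, a textbook PID step is only a few lines; this generic sketch is not multirotor's actual API (see the docs linked above for that):

    class PID:
        # Textbook PID controller; gains and timestep are made-up examples.
        def __init__(self, kp: float, ki: float, kd: float, dt: float):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint: float, measurement: float) -> float:
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Altitude hold: thrust correction toward a 10 m setpoint at 100 Hz.
    pid = PID(kp=2.0, ki=0.5, kd=1.0, dt=0.01)
    print(pid.step(setpoint=10.0, measurement=9.0))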
Could it output drones for existing sims?
X-Plane Plane Maker: https://developer.x-plane.com/manuals/planemaker/
Juno: New Origins (and also Hello Engineer)
MS Flight Simulator cockpits are built with MSFS Avionics Framework which is React-based: https://docs.flightsimulator.com/html/Introduction/SDK_Overv...
https://news.ycombinator.com/item?id=37619564 :
> [Multi-objective gym + MuJoCo] for drone simulation
> Idea: Generate code like BlenderGPT to generate drone rover sim scenarios and environments like the Moon and Mars
https://news.ycombinator.com/item?id=36052833 :
> awesome finite element analysis https://www.google.com/search?q=awesome+finite+element+analy...
Also: awesome-cfd
https://news.ycombinator.com/item?id=31049608 :
> Numerical methods in fluid mechanics: https://en.wikipedia.org/wiki/Numerical_methods_in_fluid_mec...
Re: X-plane output. As it stands, no. But I believe a translation script can be made to output the vehicle's properties into a different format. Currently it is a python dataclass.
Thank you for linking to other resources. I will take a look at them.
Np. Interesting field.
"How to create an aircraft [for MS Flight Simulator]" https://docs.flightsimulator.com/html/mergedProjects/How_To_...
Gymnasium w/ MuJoCo or similar would probably be most worthwhile in terms of research,
GlTF: https://en.m.wikipedia.org/wiki/GlTF
/? gltf msfs https://www.google.com/search?q=gltf+msfs
https://github.com/AsoboStudio/glTF-Blender-IO-MSFS :
> Microsoft Flight Simulator glTF 2.0 Importer and Exporter for Blender
MSFS docs on the Blender plugin: https://docs.flightsimulator.com/html/Asset_Creation/Blender...
There must be fluid simulation in MSFS and XPlane because they model helicopter flight characteristics.
I don't think either model things like solar thermal effect upon wings' material properties yet.
GraphCast: AI model for weather forecasting
To call this impressive is an understatement. Using a single GPU, it outperforms models that run on the world's largest supercomputers. Completely open sourced - not just model weights. And fairly simple training/input data.
> ... with the current version being the largest we can practically fit under current engineering constraints, but which have potential to scale much further in the future with greater compute resources and higher resolution data.
I can't wait to see how far other people take this.
It builds on top of supercomputer model output and does better at the specific task of medium term forecasts.
It is a kind of iterative refinement on the data that supercomputers produce — it doesn’t supplant supercomputers. In fact the paper calls out that it has a hard dependency on the output produced by supercomputers.
"BLD,ENH: Dask-scheduler (SLURM,)," https://github.com/NOAA-EMC/global-workflow/issues/796
Dask-jobqueue https://jobqueue.dask.org/ :
> provides cluster managers for PBS, SLURM, LSF, SGE and other [HPC supercomputer] resource managers
Helpful tools for this work: Dask-labextension, DaskML, CuPY, SymPy's lambdify(), Parquet, Arrow
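A minimal dask-jobqueue sketch (the queue name and worker sizing below are placeholders, not values from the linked issue):

    from dask.distributed import Client
    from dask_jobqueue import SLURMCluster

    # Placeholder queue/core/memory figures; adjust for the cluster at hand.
    cluster = SLURMCluster(queue="normal", cores=24, memory="96GB",
                           walltime="01:00:00")
    cluster.scale(jobs=4)   # ask SLURM for 4 worker jobs
    client = Client(cluster)

    import dask.array as da
    x = da.random.random((20_000, 20_000), chunks=(2_000, 2_000))
    print(x.mean().compute())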
GFS: Global Forecast System: https://en.wikipedia.org/wiki/Global_Forecast_System
TIL about Raspberry-NOAA and pywws in researching and summarizing for a comment on "Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle" (2023) https://news.ycombinator.com/item?id=38158091
Future is quantum: universities look to train engineers for an emerging industry
I don't see how you can really get a decent grasp of quantum with an undergrad. The standard American physics curriculum has some quantum in sophomore year with Modern Physics, and then Quantum in Junior/Senior year. But you can't exactly skip mechanics, E&M and all of the mathematics (Calc 1-3, Diff EQ, Partial Diff EQ, Linear Algebra) you need a background in. So you pretty much need 2 years of prep to really start learning. Even if you add some specific technology courses around the engineering, how do you get around this undergraduate program not being a Physics or Applied Physics degree, without throwing the baby out with the bathwater?
You can do applied quantum logic in an afternoon (with e.g. colab and cirq, qiskit, and/or tequila) but then how much math is necessary; what is a "real conjugate"?
In the same way you can "do ML" without knowing linear algebra and probability theory. Such people can barely extend anything, let alone design new models from scratch.
E.g. Quantum embedding isn't yet taught to undergrads, and can be quickly explained to folks interested in the field, who might not be deterred by laborious newspaper summarizations, and who might pursue this strategic and critical skill.
How many ways are there to roll a 6-sided die with qubits and quantum embedding?
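One concrete answer among many - rejection sampling with three qubits in Cirq (an illustrative sketch, not the only encoding):

    import cirq

    # Sample 3 Hadamard qubits (uniform over 0..7) and reject 6 and 7,
    # leaving a fair 6-sided die.
    qubits = cirq.LineQubit.range(3)
    circuit = cirq.Circuit(cirq.H.on_each(*qubits),
                           cirq.measure(*qubits, key="m"))
    sim = cirq.Simulator()

    rolls = []
    while len(rolls) < 10:
        bits = sim.run(circuit, repetitions=1).measurements["m"][0]
        value = int("".join(map(str, bits)), 2)
        if value < 6:
            rolls.append(value + 1)  # map 0..5 to faces 1..6
    print(rolls)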
It took years for tech to rid itself entirely of the socially-broken nerd stereotypes that pervaded early digital computing as well.
How can we get enough people into QIS Quantum fields to supply demand for new talent?
How many people need to design new models from scratch, though?
"When are we ever going to need maths?" said the high schooler.
You use the skills you have.
That's not true, many people willfully eject information that they have no use for immediately upon leaving school.
While I somewhat regret selling most of my college textbooks back, I feel that cramming for non-applied tests and quizzes was something I needed to pay them for me to do.
TIL about memory retention: spaced-repetition interval training, and projects with written-communication components to apply what's learned.
> Instead [of the Bohr model], Morello uses a real-world example in his teaching — a material called a quantum dot, which is used in some LEDs and in some television screens. “I can now teach quantum mechanics in a way that is far more engaging than the way I was taught quantum mechanics when I was an undergrad in the 1990s,” he says.
> Morello also teaches the mathematics behind quantum mechanics in a more computer-friendly way. His students learn to solve problems using matrices that they can represent using code written for the Python programming language, rather than conventional differential equations on paper.
From https://news.ycombinator.com/item?id=30782678 :
>> This "Quantum Computing for Computer Scientists" video https://youtu.be/F_Riqjdh2oM explains classical and quantum operators as just matrices. What are other good references?
Unfortunately the QuantumQ game doesn't yet have the matrix forms of the quantum logical operators in the (open source) game docs.
Would be a helpful resource, in addition to the Quantum logic Wikipedia page and NumPy and/or SymPy without cirq:
A Manim presentation demonstrating that quantum logical operator matrices are Bloch sphere rotations, are reversible, and why we restrict operators to the category of unitary transformations
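In the meantime, the matrix forms are easy to poke at with plain NumPy; unitarity and reversibility fall out directly:

    import numpy as np

    # Quantum logic gates as plain matrices: Pauli-X ("quantum NOT") and Hadamard.
    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    ket0 = np.array([1, 0], dtype=complex)
    print(X @ ket0)                                 # |0> -> |1>
    print(np.allclose(X.conj().T @ X, np.eye(2)))   # unitary: X†X = I
    print(np.allclose(H @ (H @ ket0), ket0))        # reversible: H undoes H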
> His colleagues at the UNSW are also developing laboratory courses to give students hands-on experience with the hardware in quantum technologies. For example, they designed a teaching lab to convey the fundamental concept of quantum spin, a property of electrons and some other quantum particles, using commercially available synthetic diamonds known as nitrogen vacancy centres
Some blue LEDs contain sapphire, which is even more macrostate-entanglable than diamond. Lol: https://news.ycombinator.com/item?id=36356444
"The Qubit Game (2022)" https://news.ycombinator.com/item?id=34574791 :
> Additional Q12 (K12 QIS Quantum Information Science) ideas?:
Cirq and other QIS libraries can implement _repr_svg_ e.g. for nice quantum circuit diagrams from code: https://github.com/quantumlib/Cirq/issues/2313
A Manim walkthrough that flies from top-down to low flyover with the wave states at each point in the circuit would be neat. Do classical circuit simulators simulate backwards, nonlinear flow of current?
Research achieves photo-induced superconductivity on a chip
> Their work, now published in Nature Communications, also shows that the electrical response of photo-excited K3C60 is not linear, that is, the resistance of the sample depends on the applied current. This is a key feature of superconductivity, validates some of the previous observations and provides new information and perspectives on the physics of K3C60 thin films.
"Superconducting nonlinear transport in optically driven high-temperature K3C60" (2023) https://www.nature.com/articles/s41467-023-42989-7 :
> Abstract: Optically driven quantum materials exhibit a variety of non-equilibrium functional phenomena, which to date have been primarily studied with ultrafast optical, X-Ray and photo-emission spectroscopy. However, little has been done to characterize their transient electrical responses, which are directly associated with the functionality of these materials. Especially interesting are linear and nonlinear current-voltage characteristics at frequencies below 1 THz, which are not easily measured at picosecond temporal resolution. Here, we report on ultrafast transport measurements in photo-excited K3C60. Thin films of this compound were connected to photo-conductive switches with co-planar waveguides. We observe characteristic nonlinear current-voltage responses, which in these films point to photo-induced granular superconductivity. Although these dynamics are not necessarily identical to those reported for the powder samples studied so far, they provide valuable new information on the nature of the light-induced superconducting-like state above equilibrium Tc. Furthermore, integration of non-equilibrium superconductivity into optoelectronic platforms may lead to integration in high-speed devices based on this effect.
Autonomous lab discovers best-in-class quantum dot in hours instead of years
> The goal in this study was to find the doped perovskite quantum dot with the highest "quantum yield," or the highest ratio of photons the quantum dot emits (as infrared or visible wavelengths of light) relative to the photons it absorbs (via UV light).
"Smart Dope: A Self-Driving Fluidic Lab for Accelerated Development of Doped Perovskite Quantum Dots," (2023) https://onlinelibrary.wiley.com/doi/10.1002/aenm.202302303
> Abstract: Metal cation-doped lead halide perovskite (LHP) quantum dots (QDs) with photoluminescence quantum yields (PLQYs) higher than unity, due to quantum cutting phenomena, are an important building block of the next-generation renewable energy technologies. However, synthetic route exploration and development of the highest-performing QDs for device applications remain challenging. In this work, Smart Dope is presented, which is a self-driving fluidic lab (SDFL), for the accelerated synthesis space exploration and autonomous optimization of LHP QDs. Specifically, the multi-cation doping of CsPbCl3 QDs using a one-pot high-temperature synthesis chemistry is reported. Smart Dope continuously synthesizes multi-cation-doped CsPbCl3 QDs using a high-pressure gas-liquid segmented flow format to enable continuous experimentation with minimal experimental noise at reaction temperatures up to 255°C. Smart Dope offers multiple functionalities, including accelerated mechanistic studies through digital twin QD synthesis modeling, closed-loop autonomous optimization for accelerated QD synthetic route discovery, and on-demand continuous manufacturing of high-performing QDs. Through these developments, Smart Dope autonomously identifies the optimal synthetic route of Mn-Yb co-doped CsPbCl3 QDs with a PLQY of 158%, which is the highest reported value for this class of QDs to date. Smart Dope illustrates the power of SDFLs in accelerating the discovery and development of emerging advanced energy materials.
Munich court tells Netflix to stop using H.265 video coding to stream UHD
I support open source alternatives to H.265; they would be fairly advantageous.
I don't think governments are good at regulating technical standards for industry.
And so this is a wash.
This isn't the German government trying to regulate technical standards of an industry. This is a German court upholding a patent. You can of course argue around that all you want, but I fail to see the relevance of your comment here.
Cathode-Retro: A collection of shaders to emulate the display of an NTSC signal
Seems like a good spot to mention https://github.com/Swordfish90/cool-retro-term
Cool Retro Term is a nice accessory when recording or taking screenshots - because it looks cool. Can't use it as my daily driver, though.
And enough settings in there you can make it look like your favourite old one.
A similar theme for JupyterLab/JupyterLite would be cool
jupyterlab_miami_nights is real nice, too https://anaconda.org/conda-forge/jupyterlab_miami_nights
DI's Synthwave station somewhat matches the decade: https://www.di.fm/synthwave
A lighter, almost Solarized red for terminal text also makes for a decent terminal experience IMHO.
Show HN: Open-source digital stylus with six degrees of freedom
Very cool. The use of a webcam really makes me wonder if there's a future where our regular single ~78° FOV webcams are going to be replaced by dual (stereo) fisheye webcams that can:
- Enable all sorts of new UX interactions (gestures with eye tracking)
- Enable all sorts of new peripheral interactions (stylus like this, but also things like a steering wheel for racing games)
- Enable 3D 180° filming for far more flexible webcam meetings, including VR presence, etc.
The idea of being able to use the entire 3D space in front of your computer display as an input method feels like it's coming, and using a webcam the way OP describes feels like it's a little step in that direction.
I thought this was around the corner years ago when Intel and partners had RealSense modules being built into laptops but it seems like all the players have shifted focus to more enterprise and industrial markets.
Wii Remote (2006), Wiimote Whiteboard (2007), Kinect (2010), Leap Motion (2010; Ultraleap since 2019).
There are infrared depth cameras in various phones and laptop cameras now.
[VR] Motion controllers: https://en.wikipedia.org/wiki/Motion_controller#Gaming
Inertial navigation system: https://en.wikipedia.org/wiki/Inertial_navigation_system
Inertial measurement unit: https://en.wikipedia.org/wiki/Inertial_measurement_unit :
> An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. When the magnetometer is included, IMUs are referred to as IMMUs.[1]
Moasure does displacement estimation with inertial measurement (in a mobile app w/ just accelerometer or also compass sensor data?) IIUC: https://www.moasure.com/
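Displacement from inertial measurement alone is just double integration, which is why drift is the hard part; a toy sketch (figures made up; real products fuse gyro/magnetometer data and correct for bias):

    import numpy as np

    # Dead reckoning: integrate acceleration twice to estimate displacement.
    # Any constant accelerometer bias makes the position error grow quadratically.
    dt = 0.01
    accel = np.full(200, 0.5)          # 2 s of constant 0.5 m/s^2
    velocity = np.cumsum(accel) * dt
    position = np.cumsum(velocity) * dt
    print(f"estimated displacement: {position[-1]:.2f} m")   # ~1 m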
/? wireless gesture recognition RSSI: https://scholar.google.com/scholar?q=wireless+gesture+recogn...
/? wireless gesture recognition RSSI site:github.com : https://www.google.com/search?q=wireless+gesture+recognition...
Awesome-WiFi-CSI-Sensing > Indoor Localization: https://github.com/Marsrocky/Awesome-WiFi-CSI-Sensing#indoor...
3D Scanning > Technology, Applications: https://en.wikipedia.org/wiki/3D_scanning#Technology
Are there a limited set of possible-path-corresponding diffraction patterns that NIRS (Near-Infrared Spectroscopy) could sense and process to make e.g. a magic pencil with pressure sensitivity, too?
/q.hnlog "quantum navigation": https://news.ycombinator.com/item?id=36222625#36250019 :
> Quantum navigation maps such signal sources such that inexpensive sensors can achieve something like inertial navigation FWIU?
From https://news.ycombinator.com/context?id=36249897 :
> Can low-cost lasers and Rydberg atoms, e.g. Rydberg Technology, solve for [space-based] matter-wave interferometry? [...] Does a fishing lure bobber on the water produce gravitational waves as part of the n-body gravitational-wave fluid field, and how separable are the source wave components with e.g. Quantum Fourier Transform and/or other methods?
Because the digitizer
Getting the Lorentz transformations without requiring an invariant speed (2015)
It's curious that this simple proof - which doesn't require an invariant speed, a.k.a. the Maxwell equations - was discovered after Einstein's proposal, which depends on the invariant-speed assumption. I wonder how the history of physics would have gone if someone had proposed this before Einstein. The maths needed for this derivation is quite simple, so I guess Newton or some mathematician before Einstein could have proposed special relativity.
(nonlinear) retrocausality: https://news.ycombinator.com/item?id=38047149
https://news.ycombinator.com/item?id=28402527 :
/? electrodynamic engineering in the time domain, not in the 3-space EM energy density domain https://www.google.com/search?q=electrodynamic+engineering+i...
"Electromagnetic forces in the time domain" (2022) https://opg.optica.org/oe/fulltext.cfm?uri=oe-30-18-32215&id... :
> [...] On looking through the literature, we notice that several previous studies undertook the analysis of the optical force in the time domain, but at a certain point always shifted their focus to the time average force [67–69] or, alternatively, use numerical approaches to find the force in the time domain [44,70–75]. To the best of our knowledge, only a few publications conducted analytical studies of the optical force evolution. Very recent paper employs the signal theory to derive the imaginary part of the Maxwell stress tensor, which is responsible for the oscillating optical force and torque [76]. The optical force is studied under two-wave excitation acting on a half-space [40] and on cylinders [77], and a systematic analytical study of the time evolution of the optical force has not yet been reported.
If mass warps space and time nonlinearly per the relevant confirmations of General Relativity, and there is observable retrocausality and also indefinite causal order, is forcing time to be the frame of reference - and a constant one - necessary or helpful for the OT problem and otherwise?
Ask HN: What are good books on SW architecture that don't sell microservices?
There have been multiple discussions of how the microservices movement did more harm than good, and how a modular monolith can be a much better option.
I wish there was a comprehensive book (ideally) that is practical, pragmatic, doesn't advocate the use of microservices just because it is cool, etc.
Some books that are often recommended have "microservices" in their names, which is a pretty bad start.
For example, I am thinking of how two services should communicate (I am unfortunately guilty of having more services than I really needed). There are multiple options, and the choice depends on factors like synchronous vs. asynchronous, so I would like to read a detailed analysis of all the tradeoffs and considerations. Ideally, from authors who really know what they're talking about.
Patterns of Enterprise Application Architecture, Martin Fowler, and Enterprise Integration Patterns, Gregor Hohpe.
Both predate microservices, and both are still very useful today.
"Patterns of Distributed Systems (2022)" https://martinfowler.com/articles/patterns-of-distributed-sy... and notes regarding: https://news.ycombinator.com/item?id=36504073
Computation of the n'th digit of pi in any base in O(n^2) (1997)
Squinting at this, I wonder if it's at all valid to say that the existence of a quadratic time algorithm to calculate pi has anything to do with the fact that the implicit formula of a circle is made up of quadratic terms.
In other words, if pi basically sums up the most important fact about a circle's geometry, then it's reasonable to expect that geometry to be represented somehow in the important facts about algorithms that calculate pi.
That's an interesting concept. I think similar spigot algorithms are known for other transcendentals, and I suspect if you compared them you would not find a general trend of deep connections between algorithmic complexity and the geometric features of the corresponding value. What would you look for in the spigot algorithm for e, or log 2?
I suppose e's connection to hyperbolic geometry might suggest a relationship with the implicit formula x^2 - y^2 = 1. And I guess log2's behavior would be very much connected to that since it only differs from the natural log by a constant factor.
But I know I'm reaching here. I just like fantasizing about math :).
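For concreteness, the "spigot" digit-extraction trick under discussion fits in a dozen lines of Python; this is the BBP formula for hexadecimal digits of pi (base 16 only, unlike the paper's any-base method; float precision limits it to modest n):

    def pi_hex_digit(n: int) -> int:
        # n-th hex digit of pi after the point (0-indexed), via Bailey-Borwein-Plouffe.
        def series(j: int) -> float:
            s = sum(pow(16, n - k, 8 * k + j) / (8 * k + j) for k in range(n + 1))
            k = n + 1
            while (term := 16.0 ** (n - k) / (8 * k + j)) > 1e-17:
                s += term
                k += 1
            return s
        x = 4 * series(1) - 2 * series(4) - series(5) - series(6)
        return int((x % 1) * 16)

    print([pi_hex_digit(i) for i in range(8)])  # pi = 3.243F6A88... -> [2, 4, 3, 15, 6, 10, 8, 8]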
Is there unitarity, symmetry, or conservation when x^2 ± y^2 = 1?
We square complex amplitudes to make them real.
https://twitter.com/westurner/status/967970148509503488 :
> "Partly because, mathematically, wavefunctions are vectors in a L^2 Hilbert space, which is complex-valued. Squaring the amplitude, rather Ψ∗Ψ=|Ψ|^2 is one way to ensure that you get real-valued probabilities, which is also related to the fact that […]" https://physics.stackexchange.com/questions/280748/why-do-we...
Apple’s first online store played a crucial role in the company’s resurgence
Somewhere between Macintosh & iPod + iTunes and MacOS (was: OSX (Unix, Bash,)) I think Apple was saved.
And the white cabling with the silhouettes
This is exactly how I remember and have experienced it, in that order.
MacOS became big because of the included Unix. Developers flocked. And as Steve Ballmer once said... developers, developers, developers!
The Developer story is key, I think, in addition to content CDNs and SAST/DAST for apps.
iPod Linux: https://en.wikipedia.org/wiki/IPodLinux
Rockbox firmware for an Archos and then a Nano w/ color and no WiFi: https://en.wikipedia.org/wiki/Rockbox
    # check whether the Xcode CLI tools are installed:
    xcode-select -p

    # installing Homebrew installs the Xcode CLI tools:
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    brew install podman
    brew install --cask podman-desktop

    # then, e.g.:
    podman run --rm -it docker.io/busybox
https://mac.install.guide/commandlinetools/3.html
How to add a software repository to an OS. Software repository; SLSA, Sigstore, DevSecOps: https://en.wikipedia.org/wiki/Software_repository
W/ Ansible:
osx_defaults_module: https://docs.ansible.com/ansible/latest/collections/communit...
homebrew_tap_module: https://docs.ansible.com/ansible/latest/collections/communit...
homebrew_module: https://docs.ansible.com/ansible/latest/collections/communit...
homebrew_cask_module: https://docs.ansible.com/ansible/latest/collections/communit...
From my upgrade_mac.sh: https://github.com/westurner/dotfiles/blob/develop/scripts/u... :
    upgrade_macos() {
        softwareupdate --list
        softwareupdate --download
        softwareupdate --install --all --restart
    }
From https://twitter.com/mitsuhiko/status/1720410479141487099 :
> GitHub Actions currently charges $0.16 *per minute* for the macOS M1 Runners. That comes out to $84,096 for 1 machine year
The GitHub Actions runner (actions/runner) is written in C#; it fetches tasks from GitHub Actions and posts the results back to the workflow run that spawned the build.
Gitea Actions uses nektos/act to run GitHub Actions workflow YAML build definitions: https://github.com/nektos/act
From https://twitter.com/MatthewCroughan/status/17200423527675700... :
> This is the macOS Ventura installer running in 30 VMs, in 30 #nix derivations at once. It gets the installer from Apple, automates the installation using Tesseract OCR and TCL Expect scripts. This is to test the repeatability. A single function call `makeDarwinImage`.
With a multi-stage Dockerfile/Containerfile, you can have a dev environment like Xcode or gcc+make in the first stage that builds the package; in the second stage the package is installed and tested; and then the package is signed and published to a package repo / app store / OCI container image registry.
Continuous integration: https://en.wikipedia.org/wiki/Continuous_integration
Is there a good way to do automated testing like pytest+Hypothesis+tox with e.g. the Swift programming language? OSS-Fuzz is built upon ClusterFuzz.
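For reference, the Python side of that stack is compact; a minimal pytest+Hypothesis property test (the encoder under test is a made-up example):

    from hypothesis import given, strategies as st

    def run_length_encode(s: str) -> list[tuple[str, int]]:
        out: list[tuple[str, int]] = []
        for ch in s:
            if out and out[-1][0] == ch:
                out[-1] = (ch, out[-1][1] + 1)
            else:
                out.append((ch, 1))
        return out

    @given(st.text())
    def test_roundtrip(s: str) -> None:
        # Property: decoding the encoding returns the original string.
        assert "".join(ch * n for ch, n in run_length_encode(s)) == s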
SLSA now specifies builders for signing things correctly in CI builds with keys in RAM on the build workers.
"Build your own SLSA 3+ provenance builder on GitHub Actions" https://slsa.dev/blog/2023/08/bring-your-own-builder-github
140-year-old ocean heat tech could supply islands with limitless energy
> Known as ocean thermal energy conversion or ‘OTEC,’ the technology was first invented in 1881 by French physicist Jacques Arsene d’Arsonval. He discovered that the temperature difference between sun-warmed surface water and the cold depths of the ocean could be harnessed to generate electricity. [...]
> For OTEC to work it requires a temperature difference between hot and cold water of around 20 degrees Celsius. This can only be found in the tropics, which is not a problem in itself.
OTEC: Ocean thermal energy conversion: https://en.wikipedia.org/wiki/Ocean_thermal_energy_conversio...
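That ~20 °C delta implies a very low thermodynamic ceiling; a one-line Carnot estimate (temperatures assumed: 25 °C surface water, 5 °C deep water):

    # Carnot limit for an OTEC plant between warm surface and cold deep water.
    T_hot, T_cold = 25 + 273.15, 5 + 273.15   # kelvin
    print(f"max thermal efficiency: {1 - T_cold / T_hot:.1%}")  # ~6.7%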
We used to build steel mills near cheap power. Now we build datacenters
To push waste heat to a building across the street, it's usually necessary to add heat at the source in order to more efficiently transfer the thermal energy to the receiver.
OTOH other synergies that require planning and/or zoning:
- Algae plants can capture waste CO2.
- Datacenters produce steam-sterilized water that's usually tragically not fed back into water treatment.
- Smokestacks produce CO2, Nitrogen, and other flue gases that are reusable by facilities like Copenhill and probably for production of graphene or similar smokestack air filters.
Facebook Is Ending Support for PGP Encrypted Emails
> Once a hacker gains access to a Facebook account, they can proceed to activate email encryption.
> This renders recovery emails sent to the user’s email address unreadable, as only the hacker has the encryption keys.
So: PGP encrypted emails were rarely used, except to lock out the legit user after account was compromised.
GitHub asks you to log in again to add SSH keys; this could've been similar.
They're just looking for excuses
A lot of account compromise is due to reused passwords so I'm not sure that's a complete solution.
Sending a PGP-encrypted email with a verification link to activate the feature should solve that.
Vacuum in optical cavity can change material magnetic state wo laser excitation
Why Cities: Skylines 2 performs poorly
For a bit of reference, a full frame of Crysis (benchmark scene) was around 300k vertices or triangles (memory is fuzzy), so 3-10 log piles depending on which way my memory is off and how bad the vertex/triangle ratio is in each.
Author here: I never bothered counting the total vertices used per frame because I couldn't figure out an easy way to do it in Renderdoc. However someone on Reddit measured the total vertex count with ReShade and it can apparently reach hundreds of millions and up to 1 billion vertices in closeups in large cities.
Edit: Checked the vert & poly counts with Renderdoc. The example scene in the article processes 121 million vertices and over 40 million triangles.
> The issues are luckily quite easy to fix, both by creating more LOD variants and by improving the culling system
How many polygons are there with and without e.g. AutoLOD/InstaLOD?
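For readers outside game dev, the quoted fix boils down to swapping in cheaper meshes by distance; a toy sketch (thresholds and mesh names are made up, and real engines add hysteresis and screen-space-error metrics):

    # Distance-based LOD selection: pick the cheapest mesh allowed at a distance.
    LODS = [(0.0, "lod0_full"), (50.0, "lod1_half"),
            (150.0, "lod2_quarter"), (400.0, "lod3_billboard")]

    def pick_lod(distance: float) -> str:
        chosen = LODS[0][1]
        for threshold, mesh in LODS:
            if distance >= threshold:
                chosen = mesh
        return chosen

    for d in (10, 75, 200, 1000):
        print(d, "->", pick_lod(d))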
An LLM can probably be trained to simplify meshes and create LOD variants with e.g. UnityMeshSimplifier?
Whinarn/UnityMeshSimplifier: https://github.com/Whinarn/UnityMeshSimplifier :
> Mesh simplification for Unity. The project is deeply based on the Fast Quadric Mesh Simplification algorithm, but rewritten entirely in C# and released under the MIT license.
Mesh.Optimize: https://docs.unity3d.com/ScriptReference/Mesh.Optimize.html
Unity-Technologies/AutoLOD: https://github.com/Unity-Technologies/AutoLOD
"Unity Labs: AutoLOD - Experimenting with automatic performance improvements" https://blog.unity.com/technology/unity-labs-autolod-experim...
InstaLOD: https://github.com/InstaLOD
"Simulated Mesh Simplifier": https://github.com/Unity-Technologies/AutoLOD/issues/4 :
> Yes, we had started work on a GPU-accelerated simplifier using QEM, but it was not robust enough to release.
"Any chance of getting official support now that Unreal has shown off it's AutoLOD?" https://github.com/Unity-Technologies/AutoLOD/issues/71#issu... :
> "UE4 has had automatic LOD generation since it first released - I was honestly baffled when I realized that Unity was missing what I had assumed to be a basic feature.*
> Note that Nanite (which I assume you're referring to) is not a LOD system, despite being similar in the basic goal of not rendering as many polygons for distant objects.
"Unity: Feature Request: Auto - LOD" (2023-05) https://forum.unity.com/threads/auto-lod.1440610/
"Discussion about Virtualized Geometry (as introduced by UE5)" https://github.com/godotengine/godot-proposals/issues/2793
UE5 Unreal Engine 5 docs > Rendering features > Nanite: https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Na...
Unity-GPU-Based-Occlusion-Culling: https://github.com/przemyslawzaworski/Unity-GPU-Based-Occlus...
Tractor Beams Are Real, and Could Solve a Major Space Junk Problem
"With the commercial space industry booming, the number of satellites in Earth's orbit is forecast to rise sharply. This bonanza of new satellites will eventually wear out and turn the space around Earth into a giant junkyard of debris..."
All modern commercial satellites are required to have a safe deorbit plan. The FCC requires that LEO satellites deorbit within 5 years of mission completion.
Disclaimer: I work for a commercial satellite operator. Our satellites deorbit and burn up without intervention at the end of their lifecycle.
> Our satellites deorbit and burn up without intervention at the end of their lifecycle.
I'm curious how this works. Is there a certain amount of propellant reserved for the deorbit plan?
Typically yes... Alternative mechanisms include using solar panels to increase drag, and some companies are experimenting with devices that interact with the Earth's magnetic field to produce electromagnetic drag…
But reserving fuel for “decommissioning operations” is standard practice for satellite operators and the space industry in general.
Are there any unboosted or boosted orbital trajectories that deorbit by "ejection" rather than atmospheric friction and pollution?
Is the minimum perturbation necessary to "eject from Earth orbit" lower in an Earth-Moon lunar cycler orbit? (And, if decommissioned, why shouldn't the ISS be placed into an Earth-Moon lunar cycler orbit to test systems failure, lunar cycler orbits, and the impact of extra-Van-Allen radiation on real systems?)
Couldn't you build those out of recyclable proton batteries and heat-shielded bioplastic?
"Falling metal space junk is changing Earth's upper atmosphere in ways we don't fully understand" (2023) and also solar microwave power beaming and fluids https://www.livescience.com/space/space-exploration/falling-...
"Metals from spacecraft reentry in stratospheric aerosol particles" (2023) https://www.pnas.org/doi/full/10.1073/pnas.2313374120
There sort of are some ejection disposals, but they are rarely used due to how specific the orbital parameters need to be in order for that to be the “cheaper” option.
You usually find it's not exactly the sort of disposal you're expecting, though. Several space probes have used launch trajectories where the final booster stage is placed into a heliocentric orbit, with the general trajectory putting it on an uncontrolled gravity assist; the space probe then uses a very small amount of fuel to precisely control the gravity assist it receives in order to get where it needs to go… and the booster stage has its already-heliocentric orbit further changed by the gravity assist.
It's all about the deltaV… it takes way less to deorbit LEO and even some of the MEO satellites than to perform a lunar orbit raising, and GEO would be too costly to do either, which is why the GEO graveyard orbit belt is quite well defined… As for boosting the ISS up towards the Moon to perform a Salyut 7 style long-duration equipment survival experiment… that's a lot of fuel, and we already have that experience, since we've been monitoring the entire ISS since we launched it…
Would it be easier to band together the currently unrecyclable orbital debris and waste, and boost bundles of usable material with e.g. solar power until it's usable in orbit or on the ground of a planet?
Someone probably has the costs to lift n kg of payload into orbit, then and now, in today's dollars.
Matter persists in cycler orbits around attractors.
Is there a minimum escape velocity for each pending debris object, and then also for a moon gravity-assist solar destination orbit?
How much solar energy per kg of orbital mass is necessary to dispose by ejection in some manner?
I'm reminded of Wonka's cane and the Loompland river. Can such orbits be simulated with Kerbal Space Program 2?
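Re: the energy-per-kg question above, a back-of-envelope lower bound: the mechanical energy to take 1 kg from a circular LEO to Earth escape, ignoring every propulsion and transmission loss (so real numbers would be far higher):

    MU = 3.986004418e14        # Earth's GM, m^3/s^2
    r = (6371 + 400) * 1000.0  # ~400 km circular LEO orbit radius, m
    # Specific orbital energy is -MU/(2r); escape energy is 0, so the gap is MU/(2r).
    dE_joules_per_kg = MU / (2 * r)
    print(dE_joules_per_kg / 3.6e6)  # ~8.2 kWh per kg, before any losses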
There’s two fundamental issues with that sort of approach:
1st, is the deltaV difference between useful and desirable orbits that satellite operators want to be in and the selected “disposal orbit band”… with GEO it’s just a small deltaV Hohmann transfer “up” into the geo graveyard “band”… the graveyard starts just 300km further out from earth and the geostationary satellite typically use on the order of 11m/s… which is bugger all. The Manned Manoeuvring Unit “EVA jetpack” had approximately 25m/s.
2nd, is sometimes overlooked, but nonetheless quite important aspect. Any graveyard orbit intended for use by multiple objects should cross the orbits of as few active spacecraft as possible and have as low of a relative velocity between the objects in that graveyard orbit as possible… Geosynchronous satellites are basically above all but a small number of very specific satellites in places like lunar orbit or Lagrange points, and “in a ring” so, performing a gentle boost to the graveyard orbit gets them out of the way of almost everything and they are at least as far as satellites go, not moving particularly fast relative to each other, and they are so high up nothing else is going to regularly cross their obit, so it’s a relatively low risk environment for a satellite on satellite collision…
the majority of cycling orbits tend to have a LOT of relative velocity compared to the orbits they cross, tangential crossings leave high relative velocity, and the elongated orbits tend to end up crossing through a lot of space as the orbital precession moves along… on top of that the deltaV to go from most orbits to a cycling orbit tends to be relatively high, not as much as a full transfer to lunar orbit, but it’s basically in the same ballpark as using the moon to kick yourself into a solar orbit, which while potentially doable at considerable extra cost for a GEO satellite, is completely impossible for the sort of smaller LEO and MEO satellites without an entire kick stage, tug or other propulsion which would at least for LEO probably weigh more than the satellite does.
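The "~11 m/s to the graveyard" figure above checks out with the vis-viva equation; a quick sketch (standard constants, nothing assumed beyond the 300 km band):

    import math

    MU = 3.986004418e14   # Earth's GM, m^3/s^2
    r1 = 42_164_000.0     # GEO orbit radius, m
    r2 = r1 + 300_000.0   # graveyard band, ~300 km higher

    a = (r1 + r2) / 2                      # transfer-ellipse semi-major axis
    v1 = math.sqrt(MU / r1)                # circular speed at GEO
    vp = math.sqrt(MU * (2 / r1 - 1 / a))  # transfer speed at perigee
    va = math.sqrt(MU * (2 / r2 - 1 / a))  # transfer speed at apogee
    v2 = math.sqrt(MU / r2)                # circular speed at graveyard

    dv = (vp - v1) + (v2 - va)
    print(f"{dv:.1f} m/s")                 # ~10.9 m/s, i.e. "bugger all"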
Such graveyarded debris and waste is potentially reusable in orbit, on the Moon, and on Mars.
~space tugs with electric propulsion: https://g.co/bard/share/df6189ac8113
Is there an effective fulcrum on a lever in space if you attach thrusters at one end, like a space baseball bat?
MEU: "Mission Extension Unit"
How many kWh of (solar) electricity would be necessary to transfer between a lunar cycler orbit and Earth orbit to stay within the belt? 8, orbit, 8; or 8,8,orbit,8,8 etc?
FWIU there are already-costed launch and refuel mission plans?
Launch rocket_2 with fuel for rocket_1 which is already in orbit, attach to object_1, and apply thrust towards an optimal or sufficient gravity-assisted solar trajectory or bundle of space recyclables.
Is there any data on space stations in Lunar Cycler orbits?
Are there Lunar Cycler orbits that remain within the van Allen radiation belt?
What would be the costs and benefits of long-term positioning of a space station or other vessel with international docking adapter(s) in a Lunar cycler orbit?
Nrsc5: Receive NRSC-5 digital radio stations using an RTL-SDR dongle
A GUI built on top of this: https://github.com/markjfine/nrsc5-dui
From https://github.com/markjfine/nrsc5-dui#maps :
> Maps: When listening to radio stations operated by iHeartMedia, you may view live traffic maps and weather radar. The images are typically sent every few minutes and will fill the tab area once received, processed, and loaded. Clicking the Map Viewer button on the toolbar will open a larger window to view the maps at full size. The weather radar information from the last 12 hours will be stored and can be played back by selecting the Animate Radar option. The delay between frames (in seconds) can be adjusted by changing the Animation Speed value. Other stations provide Navteq/HERE navigation information... it's on the TODO 'like to have' list.
Is this an easier way to get weather info without Internet than e.g. Raspberry-NOAA and a large antenna?
https://www.google.com/search?q=weather+satellite+antenna+ha... https://github.com/jekhokie/raspberry-noaa-v2#raspberry-noaa... :
> NOAA and Meteor-M 2 satellite imagery capture setup for the regular 64 bit Debian Bullseye computers and Raspberry Pi!
> Is this an easier way to get weather info without Internet than e.g. Raspberry-NOAA and a large antenna?
If you're OK with audio only, you can't beat NOAA weather radio: https://www.weather.gov/nwr/
You can listen with a SDR, or any number of cheap radios.
If you live within the coverage area of an FM radio station that's sending weather radar, it will probably be easier to receive than NOAA satellites.
I'm a co-maintainer of this project. If anyone has questions, I'd be happy to answer them.
Would it be feasible to do something similar with OpenWRT opkg packages to support capturing weather radar (and weather forecasts and alerts?) data from digital FM radio with a USB RTL-SDR radio?
Python apps require a bunch of disk space, which is at a premium on low-wattage always-on routers.
OpenWRT's luci-app-statistics application supports rrdtool and collectd for archived stats over time (optionally on a USB stick or an SSD instead of the flash ROM of the router, which has a max lifetime in terms of number of writes) https://github.com/openwrt/luci/tree/master/applications/luc...
From https://news.ycombinator.com/item?id=38138230 :
> LuCI is the OpenWRT web UI which is written in Lua; which is now implemented mostly as a JSON-RPC API instead of with server-side HTML templates for usability and performance on embedded devices. [...] Notes on how to write a LuCI app in Lua:
It might be possible, but I'm not sure whether a typical router would have enough CPU horsepower to do the processing required to demodulate the signal.
What models of Raspberry Pi are sufficient, or how many MHz and how much RAM are necessary, to demodulate an HD Radio stream?
(Pi Pico, Pi Zero, and Pi A+/B+/2/3/4 have 2x20 pin headers for HATs. Orange Pi 5 Plus has hardware H.265 encoding with hw-enc and gstreamer fwiu.)
I haven't investigated the CPU and RAM requirements in depth, but I have used nrsc5 on a Pi 3B without issue.
I suspect a Pi Pico would be too small.
This is actually extremely useful when hooked up to a laptop for traveling, because of the embedded traffic information and maps in the sideband data.
If you're willing to share more on this subject, color me interested.
Some stations send out traffic and weather images (as well as album art and station logos). The files can be dumped to disk using nrsc5's "--dump-aas-files" option. A few people have built GUIs that display the information in a more convenient way:
https://github.com/cmnybo/nrsc5-gui https://github.com/markjfine/nrsc5-dui https://github.com/KYDronePilot/hdfm
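For reference, a minimal invocation of that dump option from Python (the frequency and program number are examples; pick a local HD Radio station):

    import subprocess

    # Dump AAS files (traffic/weather images, album art) to ./aas while
    # tuned to 93.3 MHz, program 0 (HD1); requires nrsc5 and an RTL-SDR dongle.
    subprocess.run(["nrsc5", "--dump-aas-files", "./aas", "93.3", "0"])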
In addition to files, some stations also send out "stream" and "packet" data. There is ongoing work to reverse engineer the formats. See the discussion here for details: https://github.com/theori-io/nrsc5/pull/308
Are there Clock, Weather Forecast, or Emergency Alert text data channels in digital FM radio yet?
FWIU there are also DVB data streams?
Time information is broadcast, but in my experience it's often inaccurate.
There's also a special stream for emergency alerts, but I haven't seen it in use.
There are various data streams, but not DVB.
A lot of the details are described in the standard: https://www.nrscstandards.org/standards-and-guidelines/docum...
DVB-T could technically carry clock, weather forecasts, and alerts as text data feeds.
What needs to be done to link WEA Wireless Emergency Alerts with HD Radio data streams? Could WX radio embed a data channel, if it doesn't already for e.g. accessible captioning?
DVB-T: https://en.wikipedia.org/wiki/DVB-T :
> This system transmits compressed digital audio, digital video and other data in an MPEG transport stream, using coded orthogonal frequency-division multiplexing (COFDM or OFDM) modulation.
From https://www.rtl-sdr.com/about-rtl-sdr/ :
> The origins of RTL-SDR stem from mass produced DVB-T TV tuner dongles that were based on the RTL2832U chipset. [...]
> Over the years since its discovery RTL-SDR has become extremely popular and has democratized access to the radio spectrum. Now anyone including hobbyists on a budget can access the radio spectrum. It's worth noting that this sort of SDR capability would have cost hundreds or even thousands of dollars just a few years ago. The RTL-SDR is also sometimes referred to as RTL2832U, DVB-T SDR, DVB-T dongle, RTL dongle, or the "cheap software defined radio"
From https://www.reddit.com/r/RTLSDR/comments/6nsnqy/comment/dkbv... :
> [You need an upconverter to receive the time from the WWV shortwave clock station on 2.5, 5, 10, 15, and 20 MHz] http://www.nooelec.com/store/ham-it-up.html
From https://news.ycombinator.com/item?id=37712506 :
> TIL there's a regular heartbeat in the quantum foam; [...] https://journals.aps.org/prresearch/abstract/10.1103/PhysRev...
FCC wants to bolster amateur radio
From "WebSDR – Internet-connected Software-Defined Radios" (2023) https://news.ycombinator.com/item?id=38034417 :
> pipewire-screenaudio: https://github.com/IceDBorn/pipewire-screenaudio :
>> Extension to passthrough pipewire audio to WebRTC Screenshare
> awesome-amateur-radio#sdr https://github.com/mcaserta/awesome-amateur-radio#sdr
> The OpenWRT wiki lists a few different weather station apps that can retrieve, record, chart, and publish weather data from various weather sensors and also from GPIO or SDR; pywws, weewx
> weewx: https://github.com/weewx/weewx
> A WebSDR LuCI app would be cool.
What are some other interesting applications for [digital] terrestrial radio (in service of bolstering support for amateur radio)?
What could K12cs "Q12" STEM science classes do to encourage learning of this and adjacent EM skills?
"Listen to HD radio with a $30 RTL SDR dongle" https://news.ycombinator.com/item?id=38157466
New centralized pollination portal for better global bee data creates a buzz
Is it possible to create a lawn weed killer (a broadleaf herbicide) that doesn't kill white Dutch clover? Bees eat clover (and dandelions), and bees are essential.
"Tire dust makes up the majority of ocean microplastics" (2023) https://news.ycombinator.com/item?id=37728005 :
> "Rubber Made From Dandelions is Making Tires More Sustainable – Truly a Wondrous Plant" (2021) https://www.goodnewsnetwork.org/dandelions-produce-more-sust...
Electrical switching of the edge current chirality in quantum Hall insulators
"Electrical switching of the edge current chirality in quantum anomalous Hall insulators" (2023) https://www.nature.com/articles/s41563-023-01694-y :
> A quantum anomalous Hall (QAH) insulator is a topological phase in which the interior is insulating but electrical current flows along the edges of the sample in either a clockwise or counterclockwise direction, as dictated by the spontaneous magnetization orientation. Such a chiral edge current eliminates any backscattering, giving rise to quantized Hall resistance and zero longitudinal resistance. Here we fabricate mesoscopic QAH sandwich Hall bar devices and succeed in switching the edge current chirality through thermally assisted spin–orbit torque (SOT). The well-quantized QAH states before and after SOT switching with opposite edge current chiralities are demonstrated through four- and three-terminal measurements. We show that the SOT responsible for magnetization switching can be generated by both surface and bulk carriers. Our results further our understanding of the interplay between magnetism and topological states and usher in an easy and instantaneous method to manipulate the QAH state.
"Researchers Simplify Switching for Quantum Electronics" (2023) https://spectrum.ieee.org/amp/hall-effect-2666062907 :
> “Achieving instantaneous electrical control over the edge current chirality [direction] in QAH materials, without the need for sweeping the external magnetic field, is indispensable for the advancement of QAH-based computation and information technologies,” he said.
> [...] Finding ways to exploit these dissipation-less “chiral edge currents,” as they are known, could have far-ranging applications in quantum metrology, spintronics, and topological quantum computing. The idea was given a boost by the discovery that thin films of magnetic materials exhibit similar behavior without the need for a strong external magnetic field—something known as the quantum anomalous Hall effect (QAH)—which makes building electronic devices that harness the phenomenon much more practical.
> One stumbling block has been that switching the direction of these edge currents—a crucial step in many information-processing tasks—could be done only by passing an external magnetic field over the material. Now, researchers at Penn State University have demonstrated for the first time that they can switch the direction by simply applying a pulse of current.
Quantum anomalous Hall effect: https://en.wikipedia.org/wiki/Quantum_anomalous_Hall_effect
Show HN: MicroLua – Lua for the RP2040 Microcontroller
MicroLua allows programming the RP2040 microcontroller in Lua. It packages the latest Lua interpreter with bindings for the Pico SDK and a cooperative threading library.
MicroLua is licensed under the MIT license.
I wanted to learn about Lua and about the RP2040 microcontroller. This is the result :)
OpenWRT's LuCI web UI and Torch ML are written in Lua, and many game engines embed Lua interpreters as well.
Apache Arrow's C GLib implementation works with Lua. From https://news.ycombinator.com/item?id=38103326 :
> Apache Arrow already supports C, C++, Python, Rust, Go and has C GLib support Lua: https://github.com/apache/arrow/tree/main/c_glib/example/lua
LearnXinYminutes Lua: https://learnxinyminutes.com/docs/lua/
OpenWRT is a Make-based Linux distro for embedded devices with limited RAM and flash ROM; it also targets x86 and Docker. OpenWRT is built on `uci` (and on procd and ubusd instead of systemd and dbus). UCI is an /etc/config/* dotted.key=value configuration system; the SysV-style /etc/init.d/* scripts managed by procd read values in from it when regenerating their configuration when $1 is e.g. 'start', 'restart', or 'reload'. LuCI is the OpenWRT web UI, written in Lua; it is now implemented mostly as a JSON-RPC API instead of with server-side HTML templates, for usability and performance on embedded devices.
Notes on how to write a LuCI app in Lua: https://github.com/x-wrt/luci/commit/73cda4f4a0115bb05bbd3d1...
applications/luci-app-example: https://github.com/openwrt/luci/tree/master/applications/luc...
openwrt/luci//docs: https://github.com/openwrt/luci/tree/master/docs
https://openwrt.org/supported_devices
It's probably impossible to build OpenWRT (and opkg packages) for an RP2040W.
Yes, it's impossible (I'm a dev on OpenWRT), because the build compiles a toolchain first, then the Linux kernel, drivers, and userland apps.
> probably impossible
https://github.com/raspberrypi/pico-sdk/ links to a PDF about connecting to the interwebs with a pi pico: "Connecting to the Internet with Raspberry Pi Pico W" https://rptl.io/picow-connect
micropython/micropython//ports/rp2/boards/RPI_PICO_W: https://github.com/micropython/micropython/tree/master/ports...
micropython/micropython//lib: https://github.com/micropython/micropython/blob/master/lib
micropython/micropython//examples/network/http_server_simplistic.py: https://github.com/micropython/micropython/blob/master/examp...
micropython/micropython//examples/network/http_server_ssl.py: https://github.com/micropython/micropython/blob/master/examp...
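A minimal MicroPython sketch in the spirit of those examples: join Wi-Fi on a Pico W, then answer a single HTTP request (SSID and password are placeholders; error handling omitted):

    import network
    import socket

    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect("YOUR_SSID", "YOUR_PASSWORD")  # placeholders
    while not wlan.isconnected():
        pass  # busy-wait; a real app would time out

    s = socket.socket()
    s.bind(("0.0.0.0", 80))
    s.listen(1)
    conn, addr = s.accept()
    conn.send(b"HTTP/1.0 200 OK\r\n\r\nhello from the Pico W\n")
    conn.close()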
raspberrypi/pico-sdk//lib: btstack, cyw43-driver, lwip, mbedtls, tinyusb https://github.com/raspberrypi/pico-sdk/tree/master/lib
raspberrypi/pico-examples//pico_w/wifi/access_point/picow_access_point.c: https://github.com/raspberrypi/pico-examples/blob/master/pic...
There's an iperf opkg pkg, or is it just netperf (which works with the flent CLI and GUI)?
raspberrypi/pico-examples//pico_w/wifi/iperf/picow_iperf.c: https://github.com/raspberrypi/pico-examples/blob/master/pic...
raspberrypi/pico-examples//pico_w/wifi/freertos/iperf/picow_iperf.c: https://github.com/raspberrypi/pico-examples/blob/master/pic...
FreeRTOS > Process management: https://en.wikipedia.org/wiki/FreeRTOS#Process_management
elf2uf2: https://github.com/raspberrypi/pico-sdk/tree/master/tools/el...
adafruit/circuitpython//tests/micropython: https://github.com/adafruit/circuitpython/tree/main/tests/mi...
adafruit/circuitpython//tools: https://github.com/adafruit/circuitpython/tree/main/tools
adafruit/circuitpython//tools/cortex-m-fault-gdb.py: https://github.com/adafruit/circuitpython/blob/main/tools/co...
RP2040 > Features: 2x ARM Cortex-M0+ https://en.wikipedia.org/wiki/RP2040#Features
Pix2tex: Using a ViT to convert images of equations into LaTeX code
From "STEM formulas" https://news.ycombinator.com/item?id=36839748 :
> latex2sympy parses LaTeX and generates SymPy symbolic CAS Python code (w/ ANTLR); it is now merged into SymPy core, but you must install ANTLR first because it's an optional dependency. Then sympy.lambdify will compile a symbolic expression for use with e.g. JAX, TensorFlow, or PyTorch.
    mamba install -c conda-forge sympy antlr  # pytorch tensorflow jax # jupyterlab jupyter_console
https://news.ycombinator.com/item?id=36159017 : sympy.utilities.lambdify.lambdify(), sympytorch, sympy2jax
But then add tests! Tests for LaTeX equations that had never been executable as code.
There are a number of ways to generate tests for functions and methods with and without parameter and return types.
Property-based testing is one way to auto-generate test cases.
Property testing: https://en.wikipedia.org/wiki/Property_testing
awesome-python-testing#property-based-testing: https://github.com/cleder/awesome-python-testing#property-ba...
https://github.com/HypothesisWorks/hypothesis :
> Hypothesis is a family of testing libraries which let you write tests parametrized by a source of examples. A Hypothesis implementation then generates simple and comprehensible examples that make your tests fail. This simplifies writing your tests and makes them more powerful at the same time, by letting software automate the boring bits and do them to a higher standard than a human would, freeing you to focus on the higher level test logic.
> This sort of testing is often called "property-based testing", and the most widely known implementation of the concept is the Haskell library QuickCheck, but Hypothesis differs significantly from QuickCheck and is designed to fit idiomatically and easily into existing styles of testing that you are used to, with absolutely no familiarity with Haskell or functional programming needed.
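A minimal Hypothesis sketch for the lambdify workflow above, assuming SymPy's LaTeX parser (and its ANTLR dependency) is installed; the formula is an arbitrary example:

    import math
    import sympy
    from hypothesis import given, strategies as st
    from sympy.parsing.latex import parse_latex

    expr = parse_latex(r"\frac{x^2 + 1}{2}")             # -> (x**2 + 1)/2
    f = sympy.lambdify(sympy.Symbol("x"), expr, "math")  # compile to a plain function

    @given(st.floats(min_value=-1e6, max_value=1e6, allow_nan=False))
    def test_lambdified_matches_reference(x):
        # Property: the compiled LaTeX agrees with a hand-written reference.
        assert math.isclose(f(x), (x**2 + 1) / 2, rel_tol=1e-9)

Hypothesis then hammers the property with generated floats and shrinks any failure to a minimal counterexample.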
Fuzzing is another way to auto-generate tests and test cases, by testing combinations of function parameters as a traversal through a combinatorial graph.
Fuzzing: https://en.wikipedia.org/wiki/Fuzzing
Google/atheris is based on libFuzzer: https://github.com/google/atheris
ClusterFuzz supports libFuzzer and AFL: https://google.github.io/clusterfuzz/setting-up-fuzzing/libf...
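For example, a hedged Atheris sketch fuzzing the same LaTeX parser as above (the expected-exception import is taken from SymPy; treat it as an assumption for your version):

    import sys
    import atheris

    with atheris.instrument_imports():
        from sympy.parsing.latex import parse_latex, LaTeXParsingError

    def TestOneInput(data):
        fdp = atheris.FuzzedDataProvider(data)
        text = fdp.ConsumeUnicodeNoSurrogates(256)
        try:
            parse_latex(text)
        except LaTeXParsingError:
            pass  # invalid LaTeX is expected; any other crash is a finding

    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()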
From "curl: add --ca-native and --proxy-ca-native" https://github.com/curl/curl/pull/11049#issuecomment-1528118... :
> It looks like according to their CHANGES for OpenSSL 3.1 they've added SSL_CERT_URI and for OpenSSL 3.2 they've added SSL_CERT_PATH and are going to deprecate SSL_CERT_DIR (which could do both but had some parsing problem, still I don't get why they would deprecate it for paths). [...]
> curl reads SSL_CERT_DIR (note it's ignored for [Schannel,]) and sets that as the path. I don't know if OpenSSL is now reading the environment itself but the URI is org.openssl.winstore:// not capieng. If you have a master build then try SSL_CERT_URI=org.openssl.winstore:// curl ... and if that doesn't work try curl --capath "org.openssl.winstore://" ...
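For comparison, Python's stdlib has long exposed the same Windows store that the winstore URI targets; a small sketch (Windows-only stdlib API):

    import ssl

    # Enumerate the system "ROOT" store; each entry is
    # (cert_bytes, encoding_type, trust_oids_or_True).
    for cert_bytes, encoding, trust in ssl.enum_certificates("ROOT"):
        print(encoding, len(cert_bytes), trust)

This is roughly what `--ca-native` asks the TLS backend to do: trust the OS store instead of a bundled PEM file.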